From patchwork Tue Sep 14 20:37:25 2021
From: Klaus Jensen
To: qemu-devel@nongnu.org
Cc: Kevin Wolf, qemu-block@nongnu.org, Klaus Jensen, Markus Armbruster,
    Klaus Jensen, Hanna Reitz, Stefan Hajnoczi, Keith Busch,
    Philippe Mathieu-Daudé
Subject: [PATCH RFC 01/13] hw/nvme: move dif/pi prototypes into dif.h
Date: Tue, 14 Sep 2021 22:37:25 +0200
Message-Id: <20210914203737.182571-2-its@irrelevant.dk>
In-Reply-To: <20210914203737.182571-1-its@irrelevant.dk>
References: <20210914203737.182571-1-its@irrelevant.dk>

From: Klaus Jensen

Signed-off-by: Klaus Jensen
---
 hw/nvme/ctrl.c |  1 +
 hw/nvme/dif.c  |  1 +
 hw/nvme/dif.h  | 53 ++++++++++++++++++++++++++++++++++++++++++++++++++
 hw/nvme/nvme.h | 50 -----------------------------------------------
 4 files changed, 55 insertions(+), 50 deletions(-)
 create mode 100644 hw/nvme/dif.h

diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c
index dc0e7b00308e..65970b81d5fb 100644
--- a/hw/nvme/ctrl.c
+++ b/hw/nvme/ctrl.c
@@ -163,6 +163,7 @@
 #include "migration/vmstate.h"
 
 #include "nvme.h"
+#include "dif.h"
 #include "trace.h"
 
 #define NVME_MAX_IOQPAIRS 0xffff
diff --git a/hw/nvme/dif.c b/hw/nvme/dif.c
index 5dbd18b2a4a5..cd0cea2b5ebd 100644
--- a/hw/nvme/dif.c
+++ b/hw/nvme/dif.c
@@ -13,6 +13,7 @@
 #include "sysemu/block-backend.h"
 
 #include "nvme.h"
+#include "dif.h"
 #include "trace.h"
 
 uint16_t nvme_check_prinfo(NvmeNamespace *ns, uint8_t prinfo, uint64_t slba,
diff --git a/hw/nvme/dif.h b/hw/nvme/dif.h
new file mode 100644
index 000000000000..e36fea30e71e
--- /dev/null
+++ b/hw/nvme/dif.h
@@ -0,0 +1,53 @@
+#ifndef HW_NVME_DIF_H
+#define HW_NVME_DIF_H
+
+/* from Linux kernel (crypto/crct10dif_common.c) */
+static const uint16_t t10_dif_crc_table[256] = {
+    0x0000, 0x8BB7, 0x9CD9, 0x176E, 0xB205, 0x39B2, 0x2EDC, 0xA56B,
+    0xEFBD, 0x640A, 0x7364, 0xF8D3, 0x5DB8, 0xD60F, 0xC161, 0x4AD6,
+    0x54CD, 0xDF7A, 0xC814, 0x43A3, 0xE6C8, 0x6D7F, 0x7A11, 0xF1A6,
+    0xBB70, 0x30C7, 0x27A9, 0xAC1E, 0x0975, 0x82C2, 0x95AC, 0x1E1B,
+    0xA99A, 0x222D, 0x3543, 0xBEF4, 0x1B9F, 0x9028, 0x8746, 0x0CF1,
+    0x4627, 0xCD90, 0xDAFE, 0x5149, 0xF422, 0x7F95, 0x68FB, 0xE34C,
+    0xFD57, 0x76E0, 0x618E, 0xEA39, 0x4F52, 0xC4E5, 0xD38B, 0x583C,
+    0x12EA, 0x995D, 0x8E33, 0x0584, 0xA0EF, 0x2B58, 0x3C36, 0xB781,
+    0xD883, 0x5334, 0x445A, 0xCFED, 0x6A86, 0xE131, 0xF65F, 0x7DE8,
+    0x373E, 0xBC89, 0xABE7, 0x2050, 0x853B, 0x0E8C, 0x19E2, 0x9255,
+    0x8C4E, 0x07F9, 0x1097, 0x9B20, 0x3E4B, 0xB5FC, 0xA292, 0x2925,
+    0x63F3, 0xE844, 0xFF2A, 0x749D, 0xD1F6, 0x5A41, 0x4D2F, 0xC698,
+    0x7119, 0xFAAE, 0xEDC0, 0x6677, 0xC31C, 0x48AB, 0x5FC5, 0xD472,
+    0x9EA4, 0x1513, 0x027D, 0x89CA, 0x2CA1, 0xA716, 0xB078, 0x3BCF,
+    0x25D4, 0xAE63, 0xB90D, 0x32BA, 0x97D1, 0x1C66, 0x0B08, 0x80BF,
+    0xCA69, 0x41DE, 0x56B0, 0xDD07, 0x786C, 0xF3DB, 0xE4B5, 0x6F02,
+    0x3AB1, 0xB106, 0xA668, 0x2DDF, 0x88B4, 0x0303, 0x146D, 0x9FDA,
+    0xD50C, 0x5EBB, 0x49D5, 0xC262, 0x6709, 0xECBE, 0xFBD0, 0x7067,
+    0x6E7C, 0xE5CB, 0xF2A5, 0x7912, 0xDC79, 0x57CE, 0x40A0, 0xCB17,
+    0x81C1, 0x0A76, 0x1D18, 0x96AF, 0x33C4, 0xB873, 0xAF1D, 0x24AA,
+    0x932B, 0x189C, 0x0FF2, 0x8445, 0x212E, 0xAA99, 0xBDF7, 0x3640,
+    0x7C96, 0xF721, 0xE04F, 0x6BF8, 0xCE93, 0x4524, 0x524A, 0xD9FD,
+    0xC7E6, 0x4C51, 0x5B3F, 0xD088, 0x75E3, 0xFE54, 0xE93A, 0x628D,
+    0x285B, 0xA3EC, 0xB482, 0x3F35, 0x9A5E, 0x11E9, 0x0687, 0x8D30,
+    0xE232, 0x6985, 0x7EEB, 0xF55C, 0x5037, 0xDB80, 0xCCEE, 0x4759,
+    0x0D8F, 0x8638, 0x9156, 0x1AE1, 0xBF8A, 0x343D, 0x2353, 0xA8E4,
+    0xB6FF, 0x3D48, 0x2A26, 0xA191, 0x04FA, 0x8F4D, 0x9823, 0x1394,
+    0x5942, 0xD2F5, 0xC59B, 0x4E2C, 0xEB47, 0x60F0, 0x779E, 0xFC29,
+    0x4BA8, 0xC01F, 0xD771, 0x5CC6, 0xF9AD, 0x721A, 0x6574, 0xEEC3,
+    0xA415, 0x2FA2, 0x38CC, 0xB37B, 0x1610, 0x9DA7, 0x8AC9, 0x017E,
+    0x1F65, 0x94D2, 0x83BC, 0x080B, 0xAD60, 0x26D7, 0x31B9, 0xBA0E,
+    0xF0D8, 0x7B6F, 0x6C01, 0xE7B6, 0x42DD, 0xC96A, 0xDE04, 0x55B3
+};
+
+uint16_t nvme_check_prinfo(NvmeNamespace *ns, uint8_t prinfo, uint64_t slba,
+                           uint32_t reftag);
+uint16_t nvme_dif_mangle_mdata(NvmeNamespace *ns, uint8_t *mbuf, size_t mlen,
+                               uint64_t slba);
+void nvme_dif_pract_generate_dif(NvmeNamespace *ns, uint8_t *buf, size_t len,
+                                 uint8_t *mbuf, size_t mlen, uint16_t apptag,
+                                 uint32_t *reftag);
+uint16_t nvme_dif_check(NvmeNamespace *ns, uint8_t *buf, size_t len,
+                        uint8_t *mbuf, size_t mlen, uint8_t prinfo,
+                        uint64_t slba, uint16_t apptag,
+                        uint16_t appmask, uint32_t *reftag);
+uint16_t nvme_dif_rw(NvmeCtrl *n, NvmeRequest *req);
+
+#endif /* HW_NVME_DIF_H */
diff --git a/hw/nvme/nvme.h b/hw/nvme/nvme.h
index 83ffabade4cf..45bf96d65321 100644
--- a/hw/nvme/nvme.h
+++ b/hw/nvme/nvme.h
@@ -503,54 +503,4 @@ void nvme_rw_complete_cb(void *opaque, int ret);
 uint16_t nvme_map_dptr(NvmeCtrl *n, NvmeSg *sg, size_t len, NvmeCmd *cmd);
 
-/* from Linux kernel (crypto/crct10dif_common.c) */
-static const uint16_t t10_dif_crc_table[256] = {
-    0x0000, 0x8BB7, 0x9CD9, 0x176E, 0xB205, 0x39B2, 0x2EDC, 0xA56B,
-    0xEFBD, 0x640A, 0x7364, 0xF8D3, 0x5DB8, 0xD60F, 0xC161, 0x4AD6,
-    0x54CD, 0xDF7A, 0xC814, 0x43A3, 0xE6C8, 0x6D7F, 0x7A11, 0xF1A6,
-    0xBB70, 0x30C7, 0x27A9, 0xAC1E, 0x0975, 0x82C2, 0x95AC, 0x1E1B,
-    0xA99A, 0x222D, 0x3543, 0xBEF4, 0x1B9F, 0x9028, 0x8746, 0x0CF1,
-    0x4627, 0xCD90, 0xDAFE, 0x5149, 0xF422, 0x7F95, 0x68FB, 0xE34C,
-    0xFD57, 0x76E0, 0x618E, 0xEA39, 0x4F52, 0xC4E5, 0xD38B, 0x583C,
-    0x12EA, 0x995D, 0x8E33, 0x0584, 0xA0EF, 0x2B58, 0x3C36, 0xB781,
-    0xD883, 0x5334, 0x445A, 0xCFED, 0x6A86, 0xE131, 0xF65F, 0x7DE8,
-    0x373E, 0xBC89, 0xABE7, 0x2050, 0x853B, 0x0E8C, 0x19E2, 0x9255,
-    0x8C4E, 0x07F9, 0x1097, 0x9B20, 0x3E4B, 0xB5FC, 0xA292, 0x2925,
-    0x63F3, 0xE844, 0xFF2A, 0x749D, 0xD1F6, 0x5A41, 0x4D2F, 0xC698,
-    0x7119, 0xFAAE, 0xEDC0, 0x6677, 0xC31C, 0x48AB, 0x5FC5, 0xD472,
-    0x9EA4, 0x1513, 0x027D, 0x89CA, 0x2CA1, 0xA716, 0xB078, 0x3BCF,
-    0x25D4, 0xAE63, 0xB90D, 0x32BA, 0x97D1, 0x1C66, 0x0B08, 0x80BF,
-    0xCA69, 0x41DE, 0x56B0, 0xDD07, 0x786C, 0xF3DB, 0xE4B5, 0x6F02,
-    0x3AB1, 0xB106, 0xA668, 0x2DDF, 0x88B4, 0x0303, 0x146D, 0x9FDA,
-    0xD50C, 0x5EBB, 0x49D5, 0xC262, 0x6709, 0xECBE, 0xFBD0, 0x7067,
-    0x6E7C, 0xE5CB, 0xF2A5, 0x7912, 0xDC79, 0x57CE, 0x40A0, 0xCB17,
-    0x81C1, 0x0A76, 0x1D18, 0x96AF, 0x33C4, 0xB873, 0xAF1D, 0x24AA,
-    0x932B, 0x189C, 0x0FF2, 0x8445, 0x212E, 0xAA99, 0xBDF7, 0x3640,
-    0x7C96, 0xF721, 0xE04F, 0x6BF8, 0xCE93, 0x4524, 0x524A, 0xD9FD,
-    0xC7E6, 0x4C51, 0x5B3F, 0xD088, 0x75E3, 0xFE54, 0xE93A, 0x628D,
-    0x285B, 0xA3EC, 0xB482, 0x3F35, 0x9A5E, 0x11E9, 0x0687, 0x8D30,
-    0xE232, 0x6985, 0x7EEB, 0xF55C, 0x5037, 0xDB80, 0xCCEE, 0x4759,
-    0x0D8F, 0x8638, 0x9156, 0x1AE1, 0xBF8A, 0x343D, 0x2353, 0xA8E4,
-    0xB6FF, 0x3D48, 0x2A26, 0xA191, 0x04FA, 0x8F4D, 0x9823, 0x1394,
-    0x5942, 0xD2F5, 0xC59B, 0x4E2C, 0xEB47, 0x60F0, 0x779E, 0xFC29,
-    0x4BA8, 0xC01F, 0xD771, 0x5CC6, 0xF9AD, 0x721A, 0x6574, 0xEEC3,
-    0xA415, 0x2FA2, 0x38CC, 0xB37B, 0x1610, 0x9DA7, 0x8AC9, 0x017E,
-    0x1F65, 0x94D2, 0x83BC, 0x080B, 0xAD60, 0x26D7, 0x31B9, 0xBA0E,
-    0xF0D8, 0x7B6F, 0x6C01, 0xE7B6, 0x42DD, 0xC96A, 0xDE04, 0x55B3
-};
-
-uint16_t nvme_check_prinfo(NvmeNamespace *ns, uint8_t prinfo, uint64_t slba,
-                           uint32_t reftag);
-uint16_t nvme_dif_mangle_mdata(NvmeNamespace *ns, uint8_t *mbuf, size_t mlen,
-                               uint64_t slba);
-void nvme_dif_pract_generate_dif(NvmeNamespace *ns, uint8_t *buf, size_t len,
-                                 uint8_t *mbuf, size_t mlen, uint16_t apptag,
-                                 uint32_t *reftag);
-uint16_t nvme_dif_check(NvmeNamespace *ns, uint8_t *buf, size_t len,
-                        uint8_t *mbuf, size_t mlen, uint8_t prinfo,
-                        uint64_t slba, uint16_t apptag,
-                        uint16_t appmask, uint32_t *reftag);
-uint16_t nvme_dif_rw(NvmeCtrl *n, NvmeRequest *req);
-
-
 #endif /* HW_NVME_INTERNAL_H */

From patchwork Tue Sep 14 20:37:26 2021
From: Klaus Jensen
To: qemu-devel@nongnu.org
Cc: Kevin Wolf, qemu-block@nongnu.org, Klaus Jensen, Markus Armbruster,
    Klaus Jensen, Hanna Reitz, Stefan Hajnoczi, Keith Busch,
    Philippe Mathieu-Daudé
Subject: [PATCH RFC 02/13] hw/nvme: move zns helpers and types into zoned.h
Date: Tue, 14 Sep 2021 22:37:26 +0200
Message-Id: <20210914203737.182571-3-its@irrelevant.dk>
In-Reply-To: <20210914203737.182571-1-its@irrelevant.dk>
References: <20210914203737.182571-1-its@irrelevant.dk>

From: Klaus Jensen

Move ZNS-related helpers and types into zoned.h. Use a common prefix
(nvme_zoned or nvme_ns_zoned) for ZNS-related functions.
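[Editor's note: as context for review, the two most-used helpers being renamed here are tiny: the zone state is kept in the upper nibble of the zone descriptor's ZS byte, and the zone index takes a shift fast path when the zone size is a power of two. A minimal standalone sketch follows; the struct types are reduced stand-ins for illustration, not QEMU's real NvmeZone/NvmeNamespace.]

```c
#include <assert.h>
#include <stdint.h>

/* Reduced stand-ins for QEMU's NvmeZone/NvmeNamespace (illustration only). */
typedef struct {
    uint8_t zs;              /* zone state in bits 7:4, per the zone descriptor */
} Zone;

typedef struct {
    uint64_t zone_size;      /* zone size in LBAs */
    int zone_size_log2;      /* > 0 iff zone_size is a power of two */
    uint32_t num_zones;
} Namespace;

/* nvme_zoned_zs() / nvme_zoned_set_zs(): upper-nibble packing. */
static inline uint8_t zoned_zs(const Zone *z)
{
    return z->zs >> 4;
}

static inline void zoned_set_zs(Zone *z, uint8_t state)
{
    z->zs = (uint8_t)(state << 4);
}

/* nvme_ns_zoned_zidx(): shift when the zone size is a power of two,
 * 64-bit division otherwise. */
static inline uint32_t ns_zoned_zidx(const Namespace *ns, uint64_t slba)
{
    return ns->zone_size_log2 > 0 ? (uint32_t)(slba >> ns->zone_size_log2)
                                  : (uint32_t)(slba / ns->zone_size);
}
```

For example, with 256-LBA zones, SLBA 1000 lands in zone 3 whether computed by shift (log2 = 8) or by division.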
Signed-off-by: Klaus Jensen --- hw/nvme/ctrl.c | 92 ++++++++++++++++++++-------------------------- hw/nvme/ns.c | 39 ++++++++++---------- hw/nvme/nvme.h | 72 ------------------------------------ hw/nvme/zoned.h | 97 +++++++++++++++++++++++++++++++++++++++++++++++++ 4 files changed, 156 insertions(+), 144 deletions(-) create mode 100644 hw/nvme/zoned.h diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c index 65970b81d5fb..778a2689481d 100644 --- a/hw/nvme/ctrl.c +++ b/hw/nvme/ctrl.c @@ -164,6 +164,8 @@ #include "nvme.h" #include "dif.h" +#include "zoned.h" + #include "trace.h" #define NVME_MAX_IOQPAIRS 0xffff @@ -262,7 +264,7 @@ static void nvme_assign_zone_state(NvmeNamespace *ns, NvmeZone *zone, NvmeZoneState state) { if (QTAILQ_IN_USE(zone, entry)) { - switch (nvme_get_zone_state(zone)) { + switch (nvme_zoned_zs(zone)) { case NVME_ZONE_STATE_EXPLICITLY_OPEN: QTAILQ_REMOVE(&ns->exp_open_zones, zone, entry); break; @@ -279,7 +281,7 @@ static void nvme_assign_zone_state(NvmeNamespace *ns, NvmeZone *zone, } } - nvme_set_zone_state(zone, state); + nvme_zoned_set_zs(zone, state); switch (state) { case NVME_ZONE_STATE_EXPLICITLY_OPEN: @@ -304,7 +306,8 @@ static void nvme_assign_zone_state(NvmeNamespace *ns, NvmeZone *zone, * Check if we can open a zone without exceeding open/active limits. * AOR stands for "Active and Open Resources" (see TP 4053 section 2.5). */ -static int nvme_aor_check(NvmeNamespace *ns, uint32_t act, uint32_t opn) +static int nvme_ns_zoned_aor_check(NvmeNamespace *ns, uint32_t act, + uint32_t opn) { if (ns->params.max_active_zones != 0 && ns->nr_active_zones + act > ns->params.max_active_zones) { @@ -1552,28 +1555,11 @@ static void nvme_aio_err(NvmeRequest *req, int ret) req->status = status; } -static inline uint32_t nvme_zone_idx(NvmeNamespace *ns, uint64_t slba) -{ - return ns->zone_size_log2 > 0 ? 
slba >> ns->zone_size_log2 : - slba / ns->zone_size; -} - -static inline NvmeZone *nvme_get_zone_by_slba(NvmeNamespace *ns, uint64_t slba) -{ - uint32_t zone_idx = nvme_zone_idx(ns, slba); - - if (zone_idx >= ns->num_zones) { - return NULL; - } - - return &ns->zone_array[zone_idx]; -} - static uint16_t nvme_check_zone_state_for_write(NvmeZone *zone) { uint64_t zslba = zone->d.zslba; - switch (nvme_get_zone_state(zone)) { + switch (nvme_zoned_zs(zone)) { case NVME_ZONE_STATE_EMPTY: case NVME_ZONE_STATE_IMPLICITLY_OPEN: case NVME_ZONE_STATE_EXPLICITLY_OPEN: @@ -1598,7 +1584,7 @@ static uint16_t nvme_check_zone_state_for_write(NvmeZone *zone) static uint16_t nvme_check_zone_write(NvmeNamespace *ns, NvmeZone *zone, uint64_t slba, uint32_t nlb) { - uint64_t zcap = nvme_zone_wr_boundary(zone); + uint64_t zcap = nvme_zoned_zone_wr_boundary(zone); uint16_t status; status = nvme_check_zone_state_for_write(zone); @@ -1621,7 +1607,7 @@ static uint16_t nvme_check_zone_write(NvmeNamespace *ns, NvmeZone *zone, static uint16_t nvme_check_zone_state_for_read(NvmeZone *zone) { - switch (nvme_get_zone_state(zone)) { + switch (nvme_zoned_zs(zone)) { case NVME_ZONE_STATE_EMPTY: case NVME_ZONE_STATE_IMPLICITLY_OPEN: case NVME_ZONE_STATE_EXPLICITLY_OPEN: @@ -1646,10 +1632,10 @@ static uint16_t nvme_check_zone_read(NvmeNamespace *ns, uint64_t slba, uint64_t bndry, end; uint16_t status; - zone = nvme_get_zone_by_slba(ns, slba); + zone = nvme_ns_zoned_get_by_slba(ns, slba); assert(zone); - bndry = nvme_zone_rd_boundary(ns, zone); + bndry = nvme_zoned_zone_rd_boundary(ns, zone); end = slba + nlb; status = nvme_check_zone_state_for_read(zone); @@ -1669,7 +1655,7 @@ static uint16_t nvme_check_zone_read(NvmeNamespace *ns, uint64_t slba, if (status) { break; } - } while (end > nvme_zone_rd_boundary(ns, zone)); + } while (end > nvme_zoned_zone_rd_boundary(ns, zone)); } } @@ -1678,16 +1664,16 @@ static uint16_t nvme_check_zone_read(NvmeNamespace *ns, uint64_t slba, static uint16_t 
nvme_zrm_finish(NvmeNamespace *ns, NvmeZone *zone) { - switch (nvme_get_zone_state(zone)) { + switch (nvme_zoned_zs(zone)) { case NVME_ZONE_STATE_FULL: return NVME_SUCCESS; case NVME_ZONE_STATE_IMPLICITLY_OPEN: case NVME_ZONE_STATE_EXPLICITLY_OPEN: - nvme_aor_dec_open(ns); + nvme_ns_zoned_aor_dec_open(ns); /* fallthrough */ case NVME_ZONE_STATE_CLOSED: - nvme_aor_dec_active(ns); + nvme_ns_zoned_aor_dec_active(ns); /* fallthrough */ case NVME_ZONE_STATE_EMPTY: nvme_assign_zone_state(ns, zone, NVME_ZONE_STATE_FULL); @@ -1700,10 +1686,10 @@ static uint16_t nvme_zrm_finish(NvmeNamespace *ns, NvmeZone *zone) static uint16_t nvme_zrm_close(NvmeNamespace *ns, NvmeZone *zone) { - switch (nvme_get_zone_state(zone)) { + switch (nvme_zoned_zs(zone)) { case NVME_ZONE_STATE_EXPLICITLY_OPEN: case NVME_ZONE_STATE_IMPLICITLY_OPEN: - nvme_aor_dec_open(ns); + nvme_ns_zoned_aor_dec_open(ns); nvme_assign_zone_state(ns, zone, NVME_ZONE_STATE_CLOSED); /* fall through */ case NVME_ZONE_STATE_CLOSED: @@ -1716,13 +1702,13 @@ static uint16_t nvme_zrm_close(NvmeNamespace *ns, NvmeZone *zone) static uint16_t nvme_zrm_reset(NvmeNamespace *ns, NvmeZone *zone) { - switch (nvme_get_zone_state(zone)) { + switch (nvme_zoned_zs(zone)) { case NVME_ZONE_STATE_EXPLICITLY_OPEN: case NVME_ZONE_STATE_IMPLICITLY_OPEN: - nvme_aor_dec_open(ns); + nvme_ns_zoned_aor_dec_open(ns); /* fallthrough */ case NVME_ZONE_STATE_CLOSED: - nvme_aor_dec_active(ns); + nvme_ns_zoned_aor_dec_active(ns); /* fallthrough */ case NVME_ZONE_STATE_FULL: zone->w_ptr = zone->d.zslba; @@ -1764,7 +1750,7 @@ static uint16_t nvme_zrm_open_flags(NvmeCtrl *n, NvmeNamespace *ns, int act = 0; uint16_t status; - switch (nvme_get_zone_state(zone)) { + switch (nvme_zoned_zs(zone)) { case NVME_ZONE_STATE_EMPTY: act = 1; @@ -1774,16 +1760,16 @@ static uint16_t nvme_zrm_open_flags(NvmeCtrl *n, NvmeNamespace *ns, if (n->params.auto_transition_zones) { nvme_zrm_auto_transition_zone(ns); } - status = nvme_aor_check(ns, act, 1); + status = 
nvme_ns_zoned_aor_check(ns, act, 1); if (status) { return status; } if (act) { - nvme_aor_inc_active(ns); + nvme_ns_zoned_aor_inc_active(ns); } - nvme_aor_inc_open(ns); + nvme_ns_zoned_aor_inc_open(ns); if (flags & NVME_ZRM_AUTO) { nvme_assign_zone_state(ns, zone, NVME_ZONE_STATE_IMPLICITLY_OPEN); @@ -1826,7 +1812,7 @@ static void nvme_advance_zone_wp(NvmeNamespace *ns, NvmeZone *zone, { zone->d.wp += nlb; - if (zone->d.wp == nvme_zone_wr_boundary(zone)) { + if (zone->d.wp == nvme_zoned_zone_wr_boundary(zone)) { nvme_zrm_finish(ns, zone); } } @@ -1840,7 +1826,7 @@ static void nvme_finalize_zoned_write(NvmeNamespace *ns, NvmeRequest *req) slba = le64_to_cpu(rw->slba); nlb = le16_to_cpu(rw->nlb) + 1; - zone = nvme_get_zone_by_slba(ns, slba); + zone = nvme_ns_zoned_get_by_slba(ns, slba); assert(zone); nvme_advance_zone_wp(ns, zone, nlb); @@ -2821,7 +2807,7 @@ static uint16_t nvme_copy(NvmeCtrl *n, NvmeRequest *req) iocb->slba = le64_to_cpu(copy->sdlba); if (ns->params.zoned) { - iocb->zone = nvme_get_zone_by_slba(ns, iocb->slba); + iocb->zone = nvme_ns_zoned_get_by_slba(ns, iocb->slba); if (!iocb->zone) { status = NVME_LBA_RANGE | NVME_DNR; goto invalid; @@ -3176,7 +3162,7 @@ static uint16_t nvme_do_write(NvmeCtrl *n, NvmeRequest *req, bool append, } if (ns->params.zoned) { - zone = nvme_get_zone_by_slba(ns, slba); + zone = nvme_ns_zoned_get_by_slba(ns, slba); assert(zone); if (append) { @@ -3297,7 +3283,7 @@ static uint16_t nvme_get_mgmt_zone_slba_idx(NvmeNamespace *ns, NvmeCmd *c, return NVME_LBA_RANGE | NVME_DNR; } - *zone_idx = nvme_zone_idx(ns, *slba); + *zone_idx = nvme_ns_zoned_zidx(ns, *slba); assert(*zone_idx < ns->num_zones); return NVME_SUCCESS; @@ -3349,14 +3335,14 @@ static uint16_t nvme_offline_zone(NvmeNamespace *ns, NvmeZone *zone, static uint16_t nvme_set_zd_ext(NvmeNamespace *ns, NvmeZone *zone) { uint16_t status; - uint8_t state = nvme_get_zone_state(zone); + uint8_t state = nvme_zoned_zs(zone); if (state == NVME_ZONE_STATE_EMPTY) { - status = 
nvme_aor_check(ns, 1, 0); + status = nvme_ns_zoned_aor_check(ns, 1, 0); if (status) { return status; } - nvme_aor_inc_active(ns); + nvme_ns_zoned_aor_inc_active(ns); zone->d.za |= NVME_ZA_ZD_EXT_VALID; nvme_assign_zone_state(ns, zone, NVME_ZONE_STATE_CLOSED); return NVME_SUCCESS; @@ -3370,7 +3356,7 @@ static uint16_t nvme_bulk_proc_zone(NvmeNamespace *ns, NvmeZone *zone, op_handler_t op_hndlr, NvmeRequest *req) { uint16_t status = NVME_SUCCESS; - NvmeZoneState zs = nvme_get_zone_state(zone); + NvmeZoneState zs = nvme_zoned_zs(zone); bool proc_zone; switch (zs) { @@ -3407,7 +3393,7 @@ static uint16_t nvme_do_zone_op(NvmeNamespace *ns, NvmeZone *zone, int i; if (!proc_mask) { - status = op_hndlr(ns, zone, nvme_get_zone_state(zone), req); + status = op_hndlr(ns, zone, nvme_zoned_zs(zone), req); } else { if (proc_mask & NVME_PROC_CLOSED_ZONES) { QTAILQ_FOREACH_SAFE(zone, &ns->closed_zones, entry, next) { @@ -3555,7 +3541,7 @@ static void nvme_zone_reset_cb(void *opaque, int ret) while (iocb->idx < ns->num_zones) { NvmeZone *zone = &ns->zone_array[iocb->idx++]; - switch (nvme_get_zone_state(zone)) { + switch (nvme_zoned_zs(zone)) { case NVME_ZONE_STATE_EMPTY: if (!iocb->all) { goto done; @@ -3682,7 +3668,7 @@ static uint16_t nvme_zone_mgmt_send(NvmeCtrl *n, NvmeRequest *req) if (all || !ns->params.zd_extension_size) { return NVME_INVALID_FIELD | NVME_DNR; } - zd_ext = nvme_get_zd_extension(ns, zone_idx); + zd_ext = nvme_ns_zoned_zde(ns, zone_idx); status = nvme_h2c(n, zd_ext, ns->params.zd_extension_size, req); if (status) { trace_pci_nvme_err_zd_extension_map_error(zone_idx); @@ -3714,7 +3700,7 @@ static uint16_t nvme_zone_mgmt_send(NvmeCtrl *n, NvmeRequest *req) static bool nvme_zone_matches_filter(uint32_t zafs, NvmeZone *zl) { - NvmeZoneState zs = nvme_get_zone_state(zl); + NvmeZoneState zs = nvme_zoned_zs(zl); switch (zafs) { case NVME_ZONE_REPORT_ALL: @@ -3820,7 +3806,7 @@ static uint16_t nvme_zone_mgmt_recv(NvmeCtrl *n, NvmeRequest *req) z->zslba = 
cpu_to_le64(zone->d.zslba); z->za = zone->d.za; - if (nvme_wp_is_valid(zone)) { + if (nvme_zoned_wp_valid(zone)) { z->wp = cpu_to_le64(zone->d.wp); } else { z->wp = cpu_to_le64(~0ULL); @@ -3828,7 +3814,7 @@ static uint16_t nvme_zone_mgmt_recv(NvmeCtrl *n, NvmeRequest *req) if (zra == NVME_ZONE_REPORT_EXTENDED) { if (zone->d.za & NVME_ZA_ZD_EXT_VALID) { - memcpy(buf_p, nvme_get_zd_extension(ns, zone_idx), + memcpy(buf_p, nvme_ns_zoned_zde(ns, zone_idx), ns->params.zd_extension_size); } buf_p += ns->params.zd_extension_size; diff --git a/hw/nvme/ns.c b/hw/nvme/ns.c index b7cf1494e75b..8cdcaec99880 100644 --- a/hw/nvme/ns.c +++ b/hw/nvme/ns.c @@ -20,10 +20,11 @@ #include "sysemu/block-backend.h" #include "nvme.h" +#include "zoned.h" + #include "trace.h" #define MIN_DISCARD_GRANULARITY (4 * KiB) -#define NVME_DEFAULT_ZONE_SIZE (128 * MiB) void nvme_ns_init_format(NvmeNamespace *ns) { @@ -238,7 +239,7 @@ static void nvme_ns_zoned_init_state(NvmeNamespace *ns) zone_size = capacity - start; } zone->d.zt = NVME_ZONE_TYPE_SEQ_WRITE; - nvme_set_zone_state(zone, NVME_ZONE_STATE_EMPTY); + nvme_zoned_set_zs(zone, NVME_ZONE_STATE_EMPTY); zone->d.za = 0; zone->d.zcap = ns->zone_capacity; zone->d.zslba = start; @@ -253,7 +254,7 @@ static void nvme_ns_zoned_init_state(NvmeNamespace *ns) } } -static void nvme_ns_init_zoned(NvmeNamespace *ns) +static void nvme_ns_zoned_init(NvmeNamespace *ns) { NvmeIdNsZoned *id_ns_z; int i; @@ -298,49 +299,49 @@ static void nvme_ns_init_zoned(NvmeNamespace *ns) ns->id_ns_zoned = id_ns_z; } -static void nvme_clear_zone(NvmeNamespace *ns, NvmeZone *zone) +static void nvme_ns_zoned_clear_zone(NvmeNamespace *ns, NvmeZone *zone) { uint8_t state; zone->w_ptr = zone->d.wp; - state = nvme_get_zone_state(zone); + state = nvme_zoned_zs(zone); if (zone->d.wp != zone->d.zslba || (zone->d.za & NVME_ZA_ZD_EXT_VALID)) { if (state != NVME_ZONE_STATE_CLOSED) { trace_pci_nvme_clear_ns_close(state, zone->d.zslba); - nvme_set_zone_state(zone, NVME_ZONE_STATE_CLOSED); + 
nvme_zoned_set_zs(zone, NVME_ZONE_STATE_CLOSED); } - nvme_aor_inc_active(ns); + nvme_ns_zoned_aor_inc_active(ns); QTAILQ_INSERT_HEAD(&ns->closed_zones, zone, entry); } else { trace_pci_nvme_clear_ns_reset(state, zone->d.zslba); - nvme_set_zone_state(zone, NVME_ZONE_STATE_EMPTY); + nvme_zoned_set_zs(zone, NVME_ZONE_STATE_EMPTY); } } /* * Close all the zones that are currently open. */ -static void nvme_zoned_ns_shutdown(NvmeNamespace *ns) +static void nvme_ns_zoned_shutdown(NvmeNamespace *ns) { NvmeZone *zone, *next; QTAILQ_FOREACH_SAFE(zone, &ns->closed_zones, entry, next) { QTAILQ_REMOVE(&ns->closed_zones, zone, entry); - nvme_aor_dec_active(ns); - nvme_clear_zone(ns, zone); + nvme_ns_zoned_aor_dec_active(ns); + nvme_ns_zoned_clear_zone(ns, zone); } QTAILQ_FOREACH_SAFE(zone, &ns->imp_open_zones, entry, next) { QTAILQ_REMOVE(&ns->imp_open_zones, zone, entry); - nvme_aor_dec_open(ns); - nvme_aor_dec_active(ns); - nvme_clear_zone(ns, zone); + nvme_ns_zoned_aor_dec_open(ns); + nvme_ns_zoned_aor_dec_active(ns); + nvme_ns_zoned_clear_zone(ns, zone); } QTAILQ_FOREACH_SAFE(zone, &ns->exp_open_zones, entry, next) { QTAILQ_REMOVE(&ns->exp_open_zones, zone, entry); - nvme_aor_dec_open(ns); - nvme_aor_dec_active(ns); - nvme_clear_zone(ns, zone); + nvme_ns_zoned_aor_dec_open(ns); + nvme_ns_zoned_aor_dec_active(ns); + nvme_ns_zoned_clear_zone(ns, zone); } assert(ns->nr_open_zones == 0); @@ -413,7 +414,7 @@ int nvme_ns_setup(NvmeNamespace *ns, Error **errp) if (nvme_ns_zoned_check_calc_geometry(ns, errp) != 0) { return -1; } - nvme_ns_init_zoned(ns); + nvme_ns_zoned_init(ns); } return 0; @@ -428,7 +429,7 @@ void nvme_ns_shutdown(NvmeNamespace *ns) { blk_flush(ns->blkconf.blk); if (ns->params.zoned) { - nvme_zoned_ns_shutdown(ns); + nvme_ns_zoned_shutdown(ns); } } diff --git a/hw/nvme/nvme.h b/hw/nvme/nvme.h index 45bf96d65321..99d8b9066cc9 100644 --- a/hw/nvme/nvme.h +++ b/hw/nvme/nvme.h @@ -182,78 +182,6 @@ static inline bool nvme_ns_ext(NvmeNamespace *ns) return 
!!NVME_ID_NS_FLBAS_EXTENDED(ns->id_ns.flbas); } -static inline NvmeZoneState nvme_get_zone_state(NvmeZone *zone) -{ - return zone->d.zs >> 4; -} - -static inline void nvme_set_zone_state(NvmeZone *zone, NvmeZoneState state) -{ - zone->d.zs = state << 4; -} - -static inline uint64_t nvme_zone_rd_boundary(NvmeNamespace *ns, NvmeZone *zone) -{ - return zone->d.zslba + ns->zone_size; -} - -static inline uint64_t nvme_zone_wr_boundary(NvmeZone *zone) -{ - return zone->d.zslba + zone->d.zcap; -} - -static inline bool nvme_wp_is_valid(NvmeZone *zone) -{ - uint8_t st = nvme_get_zone_state(zone); - - return st != NVME_ZONE_STATE_FULL && - st != NVME_ZONE_STATE_READ_ONLY && - st != NVME_ZONE_STATE_OFFLINE; -} - -static inline uint8_t *nvme_get_zd_extension(NvmeNamespace *ns, - uint32_t zone_idx) -{ - return &ns->zd_extensions[zone_idx * ns->params.zd_extension_size]; -} - -static inline void nvme_aor_inc_open(NvmeNamespace *ns) -{ - assert(ns->nr_open_zones >= 0); - if (ns->params.max_open_zones) { - ns->nr_open_zones++; - assert(ns->nr_open_zones <= ns->params.max_open_zones); - } -} - -static inline void nvme_aor_dec_open(NvmeNamespace *ns) -{ - if (ns->params.max_open_zones) { - assert(ns->nr_open_zones > 0); - ns->nr_open_zones--; - } - assert(ns->nr_open_zones >= 0); -} - -static inline void nvme_aor_inc_active(NvmeNamespace *ns) -{ - assert(ns->nr_active_zones >= 0); - if (ns->params.max_active_zones) { - ns->nr_active_zones++; - assert(ns->nr_active_zones <= ns->params.max_active_zones); - } -} - -static inline void nvme_aor_dec_active(NvmeNamespace *ns) -{ - if (ns->params.max_active_zones) { - assert(ns->nr_active_zones > 0); - ns->nr_active_zones--; - assert(ns->nr_active_zones >= ns->nr_open_zones); - } - assert(ns->nr_active_zones >= 0); -} - void nvme_ns_init_format(NvmeNamespace *ns); int nvme_ns_setup(NvmeNamespace *ns, Error **errp); void nvme_ns_drain(NvmeNamespace *ns); diff --git a/hw/nvme/zoned.h b/hw/nvme/zoned.h new file mode 100644 index 
000000000000..e98b282cb615
--- /dev/null
+++ b/hw/nvme/zoned.h
@@ -0,0 +1,97 @@
+#ifndef HW_NVME_ZONED_H
+#define HW_NVME_ZONED_H
+
+#include "qemu/units.h"
+
+#include "nvme.h"
+
+#define NVME_DEFAULT_ZONE_SIZE (128 * MiB)
+
+static inline NvmeZoneState nvme_zoned_zs(NvmeZone *zone)
+{
+    return zone->d.zs >> 4;
+}
+
+static inline void nvme_zoned_set_zs(NvmeZone *zone, NvmeZoneState state)
+{
+    zone->d.zs = state << 4;
+}
+
+static inline uint64_t nvme_zoned_zone_rd_boundary(NvmeNamespace *ns,
+                                                   NvmeZone *zone)
+{
+    return zone->d.zslba + ns->zone_size;
+}
+
+static inline uint64_t nvme_zoned_zone_wr_boundary(NvmeZone *zone)
+{
+    return zone->d.zslba + zone->d.zcap;
+}
+
+static inline bool nvme_zoned_wp_valid(NvmeZone *zone)
+{
+    uint8_t st = nvme_zoned_zs(zone);
+
+    return st != NVME_ZONE_STATE_FULL &&
+           st != NVME_ZONE_STATE_READ_ONLY &&
+           st != NVME_ZONE_STATE_OFFLINE;
+}
+
+static inline uint32_t nvme_ns_zoned_zidx(NvmeNamespace *ns, uint64_t slba)
+{
+    return ns->zone_size_log2 > 0 ? slba >> ns->zone_size_log2 :
+        slba / ns->zone_size;
+}
+
+static inline NvmeZone *nvme_ns_zoned_get_by_slba(NvmeNamespace *ns, uint64_t slba)
+{
+    uint32_t zone_idx = nvme_ns_zoned_zidx(ns, slba);
+
+    assert(zone_idx < ns->num_zones);
+    return &ns->zone_array[zone_idx];
+}
+
+static inline uint8_t *nvme_ns_zoned_zde(NvmeNamespace *ns, uint32_t zone_idx)
+{
+    return &ns->zd_extensions[zone_idx * ns->params.zd_extension_size];
+}
+
+static inline void nvme_ns_zoned_aor_inc_open(NvmeNamespace *ns)
+{
+    assert(ns->nr_open_zones >= 0);
+    if (ns->params.max_open_zones) {
+        ns->nr_open_zones++;
+        assert(ns->nr_open_zones <= ns->params.max_open_zones);
+    }
+}
+
+static inline void nvme_ns_zoned_aor_dec_open(NvmeNamespace *ns)
+{
+    if (ns->params.max_open_zones) {
+        assert(ns->nr_open_zones > 0);
+        ns->nr_open_zones--;
+    }
+    assert(ns->nr_open_zones >= 0);
+}
+
+static inline void nvme_ns_zoned_aor_inc_active(NvmeNamespace *ns)
+{
+    assert(ns->nr_active_zones >= 0);
+    if (ns->params.max_active_zones) {
+        ns->nr_active_zones++;
+        assert(ns->nr_active_zones <= ns->params.max_active_zones);
+    }
+}
+
+static inline void nvme_ns_zoned_aor_dec_active(NvmeNamespace *ns)
+{
+    if (ns->params.max_active_zones) {
+        assert(ns->nr_active_zones > 0);
+        ns->nr_active_zones--;
+        assert(ns->nr_active_zones >= ns->nr_open_zones);
+    }
+    assert(ns->nr_active_zones >= 0);
+}
+
+
+#endif /* HW_NVME_ZONED_H */

From patchwork Tue Sep 14 20:37:27 2021
X-Patchwork-Submitter: Klaus Jensen
X-Patchwork-Id: 1528132
From: Klaus Jensen
To: qemu-devel@nongnu.org
Cc: Kevin Wolf, qemu-block@nongnu.org, Klaus Jensen, Markus Armbruster,
 Klaus Jensen, Hanna Reitz, Stefan Hajnoczi, Keith Busch,
 Philippe Mathieu-Daudé
Subject: [PATCH RFC 03/13] hw/nvme: move zoned namespace members to separate struct
Date: Tue, 14 Sep 2021 22:37:27 +0200
Message-Id: <20210914203737.182571-4-its@irrelevant.dk>
In-Reply-To: <20210914203737.182571-1-its@irrelevant.dk>
References: <20210914203737.182571-1-its@irrelevant.dk>

From: Klaus Jensen

In preparation for nvm and zoned namespace separation, move zoned-related
members from NvmeNamespace into NvmeNamespaceZoned. There are no functional
changes here; this is basically a s/NvmeNamespace/NvmeNamespaceZoned/ and
s/ns/zoned/ where applicable.
Signed-off-by: Klaus Jensen --- hw/nvme/ctrl.c | 276 +++++++++++++++++++++++-------------------- hw/nvme/ns.c | 134 +++++++++++---------- hw/nvme/nvme.h | 66 +++++++---- hw/nvme/zoned.h | 68 +++++------ include/block/nvme.h | 4 + 5 files changed, 304 insertions(+), 244 deletions(-) diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c index 778a2689481d..4c30823d389f 100644 --- a/hw/nvme/ctrl.c +++ b/hw/nvme/ctrl.c @@ -260,22 +260,22 @@ static uint16_t nvme_sqid(NvmeRequest *req) return le16_to_cpu(req->sq->sqid); } -static void nvme_assign_zone_state(NvmeNamespace *ns, NvmeZone *zone, +static void nvme_assign_zone_state(NvmeNamespaceZoned *zoned, NvmeZone *zone, NvmeZoneState state) { if (QTAILQ_IN_USE(zone, entry)) { switch (nvme_zoned_zs(zone)) { case NVME_ZONE_STATE_EXPLICITLY_OPEN: - QTAILQ_REMOVE(&ns->exp_open_zones, zone, entry); + QTAILQ_REMOVE(&zoned->exp_open_zones, zone, entry); break; case NVME_ZONE_STATE_IMPLICITLY_OPEN: - QTAILQ_REMOVE(&ns->imp_open_zones, zone, entry); + QTAILQ_REMOVE(&zoned->imp_open_zones, zone, entry); break; case NVME_ZONE_STATE_CLOSED: - QTAILQ_REMOVE(&ns->closed_zones, zone, entry); + QTAILQ_REMOVE(&zoned->closed_zones, zone, entry); break; case NVME_ZONE_STATE_FULL: - QTAILQ_REMOVE(&ns->full_zones, zone, entry); + QTAILQ_REMOVE(&zoned->full_zones, zone, entry); default: ; } @@ -285,16 +285,16 @@ static void nvme_assign_zone_state(NvmeNamespace *ns, NvmeZone *zone, switch (state) { case NVME_ZONE_STATE_EXPLICITLY_OPEN: - QTAILQ_INSERT_TAIL(&ns->exp_open_zones, zone, entry); + QTAILQ_INSERT_TAIL(&zoned->exp_open_zones, zone, entry); break; case NVME_ZONE_STATE_IMPLICITLY_OPEN: - QTAILQ_INSERT_TAIL(&ns->imp_open_zones, zone, entry); + QTAILQ_INSERT_TAIL(&zoned->imp_open_zones, zone, entry); break; case NVME_ZONE_STATE_CLOSED: - QTAILQ_INSERT_TAIL(&ns->closed_zones, zone, entry); + QTAILQ_INSERT_TAIL(&zoned->closed_zones, zone, entry); break; case NVME_ZONE_STATE_FULL: - QTAILQ_INSERT_TAIL(&ns->full_zones, zone, entry); + 
QTAILQ_INSERT_TAIL(&zoned->full_zones, zone, entry); case NVME_ZONE_STATE_READ_ONLY: break; default: @@ -306,17 +306,17 @@ static void nvme_assign_zone_state(NvmeNamespace *ns, NvmeZone *zone, * Check if we can open a zone without exceeding open/active limits. * AOR stands for "Active and Open Resources" (see TP 4053 section 2.5). */ -static int nvme_ns_zoned_aor_check(NvmeNamespace *ns, uint32_t act, +static int nvme_ns_zoned_aor_check(NvmeNamespaceZoned *zoned, uint32_t act, uint32_t opn) { - if (ns->params.max_active_zones != 0 && - ns->nr_active_zones + act > ns->params.max_active_zones) { - trace_pci_nvme_err_insuff_active_res(ns->params.max_active_zones); + if (zoned->max_active_zones != 0 && + zoned->nr_active_zones + act > zoned->max_active_zones) { + trace_pci_nvme_err_insuff_active_res(zoned->max_active_zones); return NVME_ZONE_TOO_MANY_ACTIVE | NVME_DNR; } - if (ns->params.max_open_zones != 0 && - ns->nr_open_zones + opn > ns->params.max_open_zones) { - trace_pci_nvme_err_insuff_open_res(ns->params.max_open_zones); + if (zoned->max_open_zones != 0 && + zoned->nr_open_zones + opn > zoned->max_open_zones) { + trace_pci_nvme_err_insuff_open_res(zoned->max_open_zones); return NVME_ZONE_TOO_MANY_OPEN | NVME_DNR; } @@ -1581,8 +1581,8 @@ static uint16_t nvme_check_zone_state_for_write(NvmeZone *zone) return NVME_INTERNAL_DEV_ERROR; } -static uint16_t nvme_check_zone_write(NvmeNamespace *ns, NvmeZone *zone, - uint64_t slba, uint32_t nlb) +static uint16_t nvme_check_zone_write(NvmeZone *zone, uint64_t slba, + uint32_t nlb) { uint64_t zcap = nvme_zoned_zone_wr_boundary(zone); uint16_t status; @@ -1625,24 +1625,24 @@ static uint16_t nvme_check_zone_state_for_read(NvmeZone *zone) return NVME_INTERNAL_DEV_ERROR; } -static uint16_t nvme_check_zone_read(NvmeNamespace *ns, uint64_t slba, +static uint16_t nvme_check_zone_read(NvmeNamespaceZoned *zoned, uint64_t slba, uint32_t nlb) { NvmeZone *zone; uint64_t bndry, end; uint16_t status; - zone = 
nvme_ns_zoned_get_by_slba(ns, slba); + zone = nvme_ns_zoned_get_by_slba(zoned, slba); assert(zone); - bndry = nvme_zoned_zone_rd_boundary(ns, zone); + bndry = nvme_zoned_zone_rd_boundary(zoned, zone); end = slba + nlb; status = nvme_check_zone_state_for_read(zone); if (status) { ; } else if (unlikely(end > bndry)) { - if (!ns->params.cross_zone_read) { + if (!(zoned->flags & NVME_NS_ZONED_CROSS_READ)) { status = NVME_ZONE_BOUNDARY_ERROR; } else { /* @@ -1655,14 +1655,14 @@ static uint16_t nvme_check_zone_read(NvmeNamespace *ns, uint64_t slba, if (status) { break; } - } while (end > nvme_zoned_zone_rd_boundary(ns, zone)); + } while (end > nvme_zoned_zone_rd_boundary(zoned, zone)); } } return status; } -static uint16_t nvme_zrm_finish(NvmeNamespace *ns, NvmeZone *zone) +static uint16_t nvme_zrm_finish(NvmeNamespaceZoned *zoned, NvmeZone *zone) { switch (nvme_zoned_zs(zone)) { case NVME_ZONE_STATE_FULL: @@ -1670,13 +1670,13 @@ static uint16_t nvme_zrm_finish(NvmeNamespace *ns, NvmeZone *zone) case NVME_ZONE_STATE_IMPLICITLY_OPEN: case NVME_ZONE_STATE_EXPLICITLY_OPEN: - nvme_ns_zoned_aor_dec_open(ns); + nvme_ns_zoned_aor_dec_open(zoned); /* fallthrough */ case NVME_ZONE_STATE_CLOSED: - nvme_ns_zoned_aor_dec_active(ns); + nvme_ns_zoned_aor_dec_active(zoned); /* fallthrough */ case NVME_ZONE_STATE_EMPTY: - nvme_assign_zone_state(ns, zone, NVME_ZONE_STATE_FULL); + nvme_assign_zone_state(zoned, zone, NVME_ZONE_STATE_FULL); return NVME_SUCCESS; default: @@ -1684,13 +1684,13 @@ static uint16_t nvme_zrm_finish(NvmeNamespace *ns, NvmeZone *zone) } } -static uint16_t nvme_zrm_close(NvmeNamespace *ns, NvmeZone *zone) +static uint16_t nvme_zrm_close(NvmeNamespaceZoned *zoned, NvmeZone *zone) { switch (nvme_zoned_zs(zone)) { case NVME_ZONE_STATE_EXPLICITLY_OPEN: case NVME_ZONE_STATE_IMPLICITLY_OPEN: - nvme_ns_zoned_aor_dec_open(ns); - nvme_assign_zone_state(ns, zone, NVME_ZONE_STATE_CLOSED); + nvme_ns_zoned_aor_dec_open(zoned); + nvme_assign_zone_state(zoned, zone, 
NVME_ZONE_STATE_CLOSED); /* fall through */ case NVME_ZONE_STATE_CLOSED: return NVME_SUCCESS; @@ -1700,20 +1700,20 @@ static uint16_t nvme_zrm_close(NvmeNamespace *ns, NvmeZone *zone) } } -static uint16_t nvme_zrm_reset(NvmeNamespace *ns, NvmeZone *zone) +static uint16_t nvme_zrm_reset(NvmeNamespaceZoned *zoned, NvmeZone *zone) { switch (nvme_zoned_zs(zone)) { case NVME_ZONE_STATE_EXPLICITLY_OPEN: case NVME_ZONE_STATE_IMPLICITLY_OPEN: - nvme_ns_zoned_aor_dec_open(ns); + nvme_ns_zoned_aor_dec_open(zoned); /* fallthrough */ case NVME_ZONE_STATE_CLOSED: - nvme_ns_zoned_aor_dec_active(ns); + nvme_ns_zoned_aor_dec_active(zoned); /* fallthrough */ case NVME_ZONE_STATE_FULL: zone->w_ptr = zone->d.zslba; zone->d.wp = zone->w_ptr; - nvme_assign_zone_state(ns, zone, NVME_ZONE_STATE_EMPTY); + nvme_assign_zone_state(zoned, zone, NVME_ZONE_STATE_EMPTY); /* fallthrough */ case NVME_ZONE_STATE_EMPTY: return NVME_SUCCESS; @@ -1723,19 +1723,19 @@ static uint16_t nvme_zrm_reset(NvmeNamespace *ns, NvmeZone *zone) } } -static void nvme_zrm_auto_transition_zone(NvmeNamespace *ns) +static void nvme_zrm_auto_transition_zone(NvmeNamespaceZoned *zoned) { NvmeZone *zone; - if (ns->params.max_open_zones && - ns->nr_open_zones == ns->params.max_open_zones) { - zone = QTAILQ_FIRST(&ns->imp_open_zones); + if (zoned->max_open_zones && + zoned->nr_open_zones == zoned->max_open_zones) { + zone = QTAILQ_FIRST(&zoned->imp_open_zones); if (zone) { /* * Automatically close this implicitly open zone. 
*/ - QTAILQ_REMOVE(&ns->imp_open_zones, zone, entry); - nvme_zrm_close(ns, zone); + QTAILQ_REMOVE(&zoned->imp_open_zones, zone, entry); + nvme_zrm_close(zoned, zone); } } } @@ -1744,7 +1744,7 @@ enum { NVME_ZRM_AUTO = 1 << 0, }; -static uint16_t nvme_zrm_open_flags(NvmeCtrl *n, NvmeNamespace *ns, +static uint16_t nvme_zrm_open_flags(NvmeCtrl *n, NvmeNamespaceZoned *zoned, NvmeZone *zone, int flags) { int act = 0; @@ -1758,21 +1758,21 @@ static uint16_t nvme_zrm_open_flags(NvmeCtrl *n, NvmeNamespace *ns, case NVME_ZONE_STATE_CLOSED: if (n->params.auto_transition_zones) { - nvme_zrm_auto_transition_zone(ns); + nvme_zrm_auto_transition_zone(zoned); } - status = nvme_ns_zoned_aor_check(ns, act, 1); + status = nvme_ns_zoned_aor_check(zoned, act, 1); if (status) { return status; } if (act) { - nvme_ns_zoned_aor_inc_active(ns); + nvme_ns_zoned_aor_inc_active(zoned); } - nvme_ns_zoned_aor_inc_open(ns); + nvme_ns_zoned_aor_inc_open(zoned); if (flags & NVME_ZRM_AUTO) { - nvme_assign_zone_state(ns, zone, NVME_ZONE_STATE_IMPLICITLY_OPEN); + nvme_assign_zone_state(zoned, zone, NVME_ZONE_STATE_IMPLICITLY_OPEN); return NVME_SUCCESS; } @@ -1783,7 +1783,7 @@ static uint16_t nvme_zrm_open_flags(NvmeCtrl *n, NvmeNamespace *ns, return NVME_SUCCESS; } - nvme_assign_zone_state(ns, zone, NVME_ZONE_STATE_EXPLICITLY_OPEN); + nvme_assign_zone_state(zoned, zone, NVME_ZONE_STATE_EXPLICITLY_OPEN); /* fallthrough */ @@ -1795,29 +1795,30 @@ static uint16_t nvme_zrm_open_flags(NvmeCtrl *n, NvmeNamespace *ns, } } -static inline uint16_t nvme_zrm_auto(NvmeCtrl *n, NvmeNamespace *ns, +static inline uint16_t nvme_zrm_auto(NvmeCtrl *n, NvmeNamespaceZoned *zoned, NvmeZone *zone) { - return nvme_zrm_open_flags(n, ns, zone, NVME_ZRM_AUTO); + return nvme_zrm_open_flags(n, zoned, zone, NVME_ZRM_AUTO); } -static inline uint16_t nvme_zrm_open(NvmeCtrl *n, NvmeNamespace *ns, +static inline uint16_t nvme_zrm_open(NvmeCtrl *n, NvmeNamespaceZoned *zoned, NvmeZone *zone) { - return nvme_zrm_open_flags(n, ns, 
zone, 0); + return nvme_zrm_open_flags(n, zoned, zone, 0); } -static void nvme_advance_zone_wp(NvmeNamespace *ns, NvmeZone *zone, +static void nvme_advance_zone_wp(NvmeNamespaceZoned *zoned, NvmeZone *zone, uint32_t nlb) { zone->d.wp += nlb; if (zone->d.wp == nvme_zoned_zone_wr_boundary(zone)) { - nvme_zrm_finish(ns, zone); + nvme_zrm_finish(zoned, zone); } } -static void nvme_finalize_zoned_write(NvmeNamespace *ns, NvmeRequest *req) +static void nvme_finalize_zoned_write(NvmeNamespaceZoned *zoned, + NvmeRequest *req) { NvmeRwCmd *rw = (NvmeRwCmd *)&req->cmd; NvmeZone *zone; @@ -1826,10 +1827,10 @@ static void nvme_finalize_zoned_write(NvmeNamespace *ns, NvmeRequest *req) slba = le64_to_cpu(rw->slba); nlb = le16_to_cpu(rw->nlb) + 1; - zone = nvme_ns_zoned_get_by_slba(ns, slba); + zone = nvme_ns_zoned_get_by_slba(zoned, slba); assert(zone); - nvme_advance_zone_wp(ns, zone, nlb); + nvme_advance_zone_wp(zoned, zone, nlb); } static inline bool nvme_is_write(NvmeRequest *req) @@ -1876,8 +1877,8 @@ void nvme_rw_complete_cb(void *opaque, int ret) block_acct_done(stats, acct); } - if (ns->params.zoned && nvme_is_write(req)) { - nvme_finalize_zoned_write(ns, req); + if (nvme_ns_zoned(ns) && nvme_is_write(req)) { + nvme_finalize_zoned_write(NVME_NAMESPACE_ZONED(ns), req); } nvme_enqueue_req_completion(nvme_cq(req), req); @@ -2503,8 +2504,8 @@ static void nvme_copy_out_completed_cb(void *opaque, int ret) goto out; } - if (ns->params.zoned) { - nvme_advance_zone_wp(ns, iocb->zone, nlb); + if (nvme_ns_zoned(ns)) { + nvme_advance_zone_wp(NVME_NAMESPACE_ZONED(ns), iocb->zone, nlb); } iocb->idx++; @@ -2623,8 +2624,8 @@ static void nvme_copy_in_completed_cb(void *opaque, int ret) goto invalid; } - if (ns->params.zoned) { - status = nvme_check_zone_write(ns, iocb->zone, iocb->slba, nlb); + if (nvme_ns_zoned(ns)) { + status = nvme_check_zone_write(iocb->zone, iocb->slba, nlb); if (status) { goto invalid; } @@ -2737,8 +2738,8 @@ static void nvme_copy_cb(void *opaque, int ret) } } - if 
(ns->params.zoned) { - status = nvme_check_zone_read(ns, slba, nlb); + if (nvme_ns_zoned(ns)) { + status = nvme_check_zone_read(NVME_NAMESPACE_ZONED(ns), slba, nlb); if (status) { goto invalid; } @@ -2806,14 +2807,16 @@ static uint16_t nvme_copy(NvmeCtrl *n, NvmeRequest *req) iocb->slba = le64_to_cpu(copy->sdlba); - if (ns->params.zoned) { - iocb->zone = nvme_ns_zoned_get_by_slba(ns, iocb->slba); + if (nvme_ns_zoned(ns)) { + NvmeNamespaceZoned *zoned = NVME_NAMESPACE_ZONED(ns); + + iocb->zone = nvme_ns_zoned_get_by_slba(zoned, iocb->slba); if (!iocb->zone) { status = NVME_LBA_RANGE | NVME_DNR; goto invalid; } - status = nvme_zrm_auto(n, ns, iocb->zone); + status = nvme_zrm_auto(n, zoned, iocb->zone); if (status) { goto invalid; } @@ -3081,8 +3084,8 @@ static uint16_t nvme_read(NvmeCtrl *n, NvmeRequest *req) goto invalid; } - if (ns->params.zoned) { - status = nvme_check_zone_read(ns, slba, nlb); + if (nvme_ns_zoned(ns)) { + status = nvme_check_zone_read(NVME_NAMESPACE_ZONED(ns), slba, nlb); if (status) { trace_pci_nvme_err_zone_read_not_ok(slba, nlb, status); goto invalid; @@ -3161,8 +3164,10 @@ static uint16_t nvme_do_write(NvmeCtrl *n, NvmeRequest *req, bool append, goto invalid; } - if (ns->params.zoned) { - zone = nvme_ns_zoned_get_by_slba(ns, slba); + if (nvme_ns_zoned(ns)) { + NvmeNamespaceZoned *zoned = NVME_NAMESPACE_ZONED(ns); + + zone = nvme_ns_zoned_get_by_slba(zoned, slba); assert(zone); if (append) { @@ -3209,12 +3214,12 @@ static uint16_t nvme_do_write(NvmeCtrl *n, NvmeRequest *req, bool append, } } - status = nvme_check_zone_write(ns, zone, slba, nlb); + status = nvme_check_zone_write(zone, slba, nlb); if (status) { goto invalid; } - status = nvme_zrm_auto(n, ns, zone); + status = nvme_zrm_auto(n, zoned, zone); if (status) { goto invalid; } @@ -3268,14 +3273,18 @@ static inline uint16_t nvme_zone_append(NvmeCtrl *n, NvmeRequest *req) static uint16_t nvme_get_mgmt_zone_slba_idx(NvmeNamespace *ns, NvmeCmd *c, uint64_t *slba, uint32_t *zone_idx) { + 
NvmeNamespaceZoned *zoned; + uint32_t dw10 = le32_to_cpu(c->cdw10); uint32_t dw11 = le32_to_cpu(c->cdw11); - if (!ns->params.zoned) { + if (!nvme_ns_zoned(ns)) { trace_pci_nvme_err_invalid_opc(c->opcode); return NVME_INVALID_OPCODE | NVME_DNR; } + zoned = NVME_NAMESPACE_ZONED(ns); + *slba = ((uint64_t)dw11) << 32 | dw10; if (unlikely(*slba >= ns->id_ns.nsze)) { trace_pci_nvme_err_invalid_lba_range(*slba, 0, ns->id_ns.nsze); @@ -3283,14 +3292,14 @@ static uint16_t nvme_get_mgmt_zone_slba_idx(NvmeNamespace *ns, NvmeCmd *c, return NVME_LBA_RANGE | NVME_DNR; } - *zone_idx = nvme_ns_zoned_zidx(ns, *slba); - assert(*zone_idx < ns->num_zones); + *zone_idx = nvme_ns_zoned_zidx(zoned, *slba); + assert(*zone_idx < zoned->num_zones); return NVME_SUCCESS; } -typedef uint16_t (*op_handler_t)(NvmeNamespace *, NvmeZone *, NvmeZoneState, - NvmeRequest *); +typedef uint16_t (*op_handler_t)(NvmeNamespaceZoned *, NvmeZone *, + NvmeZoneState, NvmeRequest *); enum NvmeZoneProcessingMask { NVME_PROC_CURRENT_ZONE = 0, @@ -3300,30 +3309,30 @@ enum NvmeZoneProcessingMask { NVME_PROC_FULL_ZONES = 1 << 3, }; -static uint16_t nvme_open_zone(NvmeNamespace *ns, NvmeZone *zone, +static uint16_t nvme_open_zone(NvmeNamespaceZoned *zoned, NvmeZone *zone, NvmeZoneState state, NvmeRequest *req) { - return nvme_zrm_open(nvme_ctrl(req), ns, zone); + return nvme_zrm_open(nvme_ctrl(req), zoned, zone); } -static uint16_t nvme_close_zone(NvmeNamespace *ns, NvmeZone *zone, +static uint16_t nvme_close_zone(NvmeNamespaceZoned *zoned, NvmeZone *zone, NvmeZoneState state, NvmeRequest *req) { - return nvme_zrm_close(ns, zone); + return nvme_zrm_close(zoned, zone); } -static uint16_t nvme_finish_zone(NvmeNamespace *ns, NvmeZone *zone, +static uint16_t nvme_finish_zone(NvmeNamespaceZoned *zoned, NvmeZone *zone, NvmeZoneState state, NvmeRequest *req) { - return nvme_zrm_finish(ns, zone); + return nvme_zrm_finish(zoned, zone); } -static uint16_t nvme_offline_zone(NvmeNamespace *ns, NvmeZone *zone, +static uint16_t 
nvme_offline_zone(NvmeNamespaceZoned *zoned, NvmeZone *zone, NvmeZoneState state, NvmeRequest *req) { switch (state) { case NVME_ZONE_STATE_READ_ONLY: - nvme_assign_zone_state(ns, zone, NVME_ZONE_STATE_OFFLINE); + nvme_assign_zone_state(zoned, zone, NVME_ZONE_STATE_OFFLINE); /* fall through */ case NVME_ZONE_STATE_OFFLINE: return NVME_SUCCESS; @@ -3332,26 +3341,26 @@ static uint16_t nvme_offline_zone(NvmeNamespace *ns, NvmeZone *zone, } } -static uint16_t nvme_set_zd_ext(NvmeNamespace *ns, NvmeZone *zone) +static uint16_t nvme_set_zd_ext(NvmeNamespaceZoned *zoned, NvmeZone *zone) { uint16_t status; uint8_t state = nvme_zoned_zs(zone); if (state == NVME_ZONE_STATE_EMPTY) { - status = nvme_ns_zoned_aor_check(ns, 1, 0); + status = nvme_ns_zoned_aor_check(zoned, 1, 0); if (status) { return status; } - nvme_ns_zoned_aor_inc_active(ns); + nvme_ns_zoned_aor_inc_active(zoned); zone->d.za |= NVME_ZA_ZD_EXT_VALID; - nvme_assign_zone_state(ns, zone, NVME_ZONE_STATE_CLOSED); + nvme_assign_zone_state(zoned, zone, NVME_ZONE_STATE_CLOSED); return NVME_SUCCESS; } return NVME_ZONE_INVAL_TRANSITION; } -static uint16_t nvme_bulk_proc_zone(NvmeNamespace *ns, NvmeZone *zone, +static uint16_t nvme_bulk_proc_zone(NvmeNamespaceZoned *zoned, NvmeZone *zone, enum NvmeZoneProcessingMask proc_mask, op_handler_t op_hndlr, NvmeRequest *req) { @@ -3378,13 +3387,13 @@ static uint16_t nvme_bulk_proc_zone(NvmeNamespace *ns, NvmeZone *zone, } if (proc_zone) { - status = op_hndlr(ns, zone, zs, req); + status = op_hndlr(zoned, zone, zs, req); } return status; } -static uint16_t nvme_do_zone_op(NvmeNamespace *ns, NvmeZone *zone, +static uint16_t nvme_do_zone_op(NvmeNamespaceZoned *zoned, NvmeZone *zone, enum NvmeZoneProcessingMask proc_mask, op_handler_t op_hndlr, NvmeRequest *req) { @@ -3393,11 +3402,11 @@ static uint16_t nvme_do_zone_op(NvmeNamespace *ns, NvmeZone *zone, int i; if (!proc_mask) { - status = op_hndlr(ns, zone, nvme_zoned_zs(zone), req); + status = op_hndlr(zoned, zone, 
nvme_zoned_zs(zone), req); } else { if (proc_mask & NVME_PROC_CLOSED_ZONES) { - QTAILQ_FOREACH_SAFE(zone, &ns->closed_zones, entry, next) { - status = nvme_bulk_proc_zone(ns, zone, proc_mask, op_hndlr, + QTAILQ_FOREACH_SAFE(zone, &zoned->closed_zones, entry, next) { + status = nvme_bulk_proc_zone(zoned, zone, proc_mask, op_hndlr, req); if (status && status != NVME_NO_COMPLETE) { goto out; @@ -3405,16 +3414,16 @@ static uint16_t nvme_do_zone_op(NvmeNamespace *ns, NvmeZone *zone, } } if (proc_mask & NVME_PROC_OPENED_ZONES) { - QTAILQ_FOREACH_SAFE(zone, &ns->imp_open_zones, entry, next) { - status = nvme_bulk_proc_zone(ns, zone, proc_mask, op_hndlr, + QTAILQ_FOREACH_SAFE(zone, &zoned->imp_open_zones, entry, next) { + status = nvme_bulk_proc_zone(zoned, zone, proc_mask, op_hndlr, req); if (status && status != NVME_NO_COMPLETE) { goto out; } } - QTAILQ_FOREACH_SAFE(zone, &ns->exp_open_zones, entry, next) { - status = nvme_bulk_proc_zone(ns, zone, proc_mask, op_hndlr, + QTAILQ_FOREACH_SAFE(zone, &zoned->exp_open_zones, entry, next) { + status = nvme_bulk_proc_zone(zoned, zone, proc_mask, op_hndlr, req); if (status && status != NVME_NO_COMPLETE) { goto out; @@ -3422,8 +3431,8 @@ static uint16_t nvme_do_zone_op(NvmeNamespace *ns, NvmeZone *zone, } } if (proc_mask & NVME_PROC_FULL_ZONES) { - QTAILQ_FOREACH_SAFE(zone, &ns->full_zones, entry, next) { - status = nvme_bulk_proc_zone(ns, zone, proc_mask, op_hndlr, + QTAILQ_FOREACH_SAFE(zone, &zoned->full_zones, entry, next) { + status = nvme_bulk_proc_zone(zoned, zone, proc_mask, op_hndlr, req); if (status && status != NVME_NO_COMPLETE) { goto out; @@ -3432,8 +3441,8 @@ static uint16_t nvme_do_zone_op(NvmeNamespace *ns, NvmeZone *zone, } if (proc_mask & NVME_PROC_READ_ONLY_ZONES) { - for (i = 0; i < ns->num_zones; i++, zone++) { - status = nvme_bulk_proc_zone(ns, zone, proc_mask, op_hndlr, + for (i = 0; i < zoned->num_zones; i++, zone++) { + status = nvme_bulk_proc_zone(zoned, zone, proc_mask, op_hndlr, req); if (status && 
status != NVME_NO_COMPLETE) { goto out; @@ -3464,7 +3473,7 @@ static void nvme_zone_reset_cancel(BlockAIOCB *aiocb) NvmeRequest *req = iocb->req; NvmeNamespace *ns = req->ns; - iocb->idx = ns->num_zones; + iocb->idx = NVME_NAMESPACE_ZONED(ns)->num_zones; iocb->ret = -ECANCELED; @@ -3511,7 +3520,7 @@ static void nvme_zone_reset_epilogue_cb(void *opaque, int ret) } moff = nvme_moff(ns, iocb->zone->d.zslba); - count = nvme_m2b(ns, ns->zone_size); + count = nvme_m2b(ns, NVME_NAMESPACE_ZONED(ns)->zone_size); iocb->aiocb = blk_aio_pwrite_zeroes(ns->blkconf.blk, moff, count, BDRV_REQ_MAY_UNMAP, @@ -3524,6 +3533,7 @@ static void nvme_zone_reset_cb(void *opaque, int ret) NvmeZoneResetAIOCB *iocb = opaque; NvmeRequest *req = iocb->req; NvmeNamespace *ns = req->ns; + NvmeNamespaceZoned *zoned = NVME_NAMESPACE_ZONED(ns); if (ret < 0) { iocb->ret = ret; @@ -3531,15 +3541,15 @@ static void nvme_zone_reset_cb(void *opaque, int ret) } if (iocb->zone) { - nvme_zrm_reset(ns, iocb->zone); + nvme_zrm_reset(zoned, iocb->zone); if (!iocb->all) { goto done; } } - while (iocb->idx < ns->num_zones) { - NvmeZone *zone = &ns->zone_array[iocb->idx++]; + while (iocb->idx < zoned->num_zones) { + NvmeZone *zone = &zoned->zone_array[iocb->idx++]; switch (nvme_zoned_zs(zone)) { case NVME_ZONE_STATE_EMPTY: @@ -3564,7 +3574,7 @@ static void nvme_zone_reset_cb(void *opaque, int ret) iocb->aiocb = blk_aio_pwrite_zeroes(ns->blkconf.blk, nvme_l2b(ns, zone->d.zslba), - nvme_l2b(ns, ns->zone_size), + nvme_l2b(ns, zoned->zone_size), BDRV_REQ_MAY_UNMAP, nvme_zone_reset_epilogue_cb, iocb); @@ -3582,6 +3592,7 @@ static uint16_t nvme_zone_mgmt_send(NvmeCtrl *n, NvmeRequest *req) { NvmeCmd *cmd = (NvmeCmd *)&req->cmd; NvmeNamespace *ns = req->ns; + NvmeNamespaceZoned *zoned = NVME_NAMESPACE_ZONED(ns); NvmeZone *zone; NvmeZoneResetAIOCB *iocb; uint8_t *zd_ext; @@ -3605,7 +3616,7 @@ static uint16_t nvme_zone_mgmt_send(NvmeCtrl *n, NvmeRequest *req) } } - zone = &ns->zone_array[zone_idx]; + zone = 
&zoned->zone_array[zone_idx]; if (slba != zone->d.zslba) { trace_pci_nvme_err_unaligned_zone_cmd(action, slba, zone->d.zslba); return NVME_INVALID_FIELD | NVME_DNR; @@ -3618,7 +3629,7 @@ static uint16_t nvme_zone_mgmt_send(NvmeCtrl *n, NvmeRequest *req) proc_mask = NVME_PROC_CLOSED_ZONES; } trace_pci_nvme_open_zone(slba, zone_idx, all); - status = nvme_do_zone_op(ns, zone, proc_mask, nvme_open_zone, req); + status = nvme_do_zone_op(zoned, zone, proc_mask, nvme_open_zone, req); break; case NVME_ZONE_ACTION_CLOSE: @@ -3626,7 +3637,7 @@ static uint16_t nvme_zone_mgmt_send(NvmeCtrl *n, NvmeRequest *req) proc_mask = NVME_PROC_OPENED_ZONES; } trace_pci_nvme_close_zone(slba, zone_idx, all); - status = nvme_do_zone_op(ns, zone, proc_mask, nvme_close_zone, req); + status = nvme_do_zone_op(zoned, zone, proc_mask, nvme_close_zone, req); break; case NVME_ZONE_ACTION_FINISH: @@ -3634,7 +3645,8 @@ static uint16_t nvme_zone_mgmt_send(NvmeCtrl *n, NvmeRequest *req) proc_mask = NVME_PROC_OPENED_ZONES | NVME_PROC_CLOSED_ZONES; } trace_pci_nvme_finish_zone(slba, zone_idx, all); - status = nvme_do_zone_op(ns, zone, proc_mask, nvme_finish_zone, req); + status = nvme_do_zone_op(zoned, zone, proc_mask, nvme_finish_zone, + req); break; case NVME_ZONE_ACTION_RESET: @@ -3660,22 +3672,23 @@ static uint16_t nvme_zone_mgmt_send(NvmeCtrl *n, NvmeRequest *req) proc_mask = NVME_PROC_READ_ONLY_ZONES; } trace_pci_nvme_offline_zone(slba, zone_idx, all); - status = nvme_do_zone_op(ns, zone, proc_mask, nvme_offline_zone, req); + status = nvme_do_zone_op(zoned, zone, proc_mask, nvme_offline_zone, + req); break; case NVME_ZONE_ACTION_SET_ZD_EXT: trace_pci_nvme_set_descriptor_extension(slba, zone_idx); - if (all || !ns->params.zd_extension_size) { + if (all || !zoned->zd_extension_size) { return NVME_INVALID_FIELD | NVME_DNR; } - zd_ext = nvme_ns_zoned_zde(ns, zone_idx); - status = nvme_h2c(n, zd_ext, ns->params.zd_extension_size, req); + zd_ext = nvme_ns_zoned_zde(zoned, zone_idx); + status = 
nvme_h2c(n, zd_ext, zoned->zd_extension_size, req); if (status) { trace_pci_nvme_err_zd_extension_map_error(zone_idx); return status; } - status = nvme_set_zd_ext(ns, zone); + status = nvme_set_zd_ext(zoned, zone); if (status == NVME_SUCCESS) { trace_pci_nvme_zd_extension_set(zone_idx); return status; @@ -3728,6 +3741,7 @@ static uint16_t nvme_zone_mgmt_recv(NvmeCtrl *n, NvmeRequest *req) { NvmeCmd *cmd = (NvmeCmd *)&req->cmd; NvmeNamespace *ns = req->ns; + NvmeNamespaceZoned *zoned = NVME_NAMESPACE_ZONED(ns); /* cdw12 is zero-based number of dwords to return. Convert to bytes */ uint32_t data_size = (le32_to_cpu(cmd->cdw12) + 1) << 2; uint32_t dw13 = le32_to_cpu(cmd->cdw13); @@ -3753,7 +3767,7 @@ static uint16_t nvme_zone_mgmt_recv(NvmeCtrl *n, NvmeRequest *req) if (zra != NVME_ZONE_REPORT && zra != NVME_ZONE_REPORT_EXTENDED) { return NVME_INVALID_FIELD | NVME_DNR; } - if (zra == NVME_ZONE_REPORT_EXTENDED && !ns->params.zd_extension_size) { + if (zra == NVME_ZONE_REPORT_EXTENDED && !zoned->zd_extension_size) { return NVME_INVALID_FIELD | NVME_DNR; } @@ -3775,14 +3789,14 @@ static uint16_t nvme_zone_mgmt_recv(NvmeCtrl *n, NvmeRequest *req) zone_entry_sz = sizeof(NvmeZoneDescr); if (zra == NVME_ZONE_REPORT_EXTENDED) { - zone_entry_sz += ns->params.zd_extension_size; + zone_entry_sz += zoned->zd_extension_size; } max_zones = (data_size - sizeof(NvmeZoneReportHeader)) / zone_entry_sz; buf = g_malloc0(data_size); - zone = &ns->zone_array[zone_idx]; - for (i = zone_idx; i < ns->num_zones; i++) { + zone = &zoned->zone_array[zone_idx]; + for (i = zone_idx; i < zoned->num_zones; i++) { if (partial && nr_zones >= max_zones) { break; } @@ -3794,8 +3808,8 @@ static uint16_t nvme_zone_mgmt_recv(NvmeCtrl *n, NvmeRequest *req) header->nr_zones = cpu_to_le64(nr_zones); buf_p = buf + sizeof(NvmeZoneReportHeader); - for (; zone_idx < ns->num_zones && max_zones > 0; zone_idx++) { - zone = &ns->zone_array[zone_idx]; + for (; zone_idx < zoned->num_zones && max_zones > 0; zone_idx++) { 
+ zone = &zoned->zone_array[zone_idx]; if (nvme_zone_matches_filter(zrasf, zone)) { z = (NvmeZoneDescr *)buf_p; buf_p += sizeof(NvmeZoneDescr); @@ -3814,10 +3828,10 @@ static uint16_t nvme_zone_mgmt_recv(NvmeCtrl *n, NvmeRequest *req) if (zra == NVME_ZONE_REPORT_EXTENDED) { if (zone->d.za & NVME_ZA_ZD_EXT_VALID) { - memcpy(buf_p, nvme_ns_zoned_zde(ns, zone_idx), - ns->params.zd_extension_size); + memcpy(buf_p, nvme_ns_zoned_zde(zoned, zone_idx), + zoned->zd_extension_size); } - buf_p += ns->params.zd_extension_size; + buf_p += zoned->zd_extension_size; } max_zones--; @@ -4538,8 +4552,8 @@ static uint16_t nvme_identify_ns_csi(NvmeCtrl *n, NvmeRequest *req, if (c->csi == NVME_CSI_NVM) { return nvme_rpt_empty_id_struct(n, req); } else if (c->csi == NVME_CSI_ZONED && ns->csi == NVME_CSI_ZONED) { - return nvme_c2h(n, (uint8_t *)ns->id_ns_zoned, sizeof(NvmeIdNsZoned), - req); + return nvme_c2h(n, (uint8_t *)&NVME_NAMESPACE_ZONED(ns)->id_ns, + sizeof(NvmeIdNsZoned), req); } return NVME_INVALID_FIELD | NVME_DNR; @@ -5339,7 +5353,7 @@ done: static uint16_t nvme_format_check(NvmeNamespace *ns, uint8_t lbaf, uint8_t pi) { - if (ns->params.zoned) { + if (nvme_ns_zoned(ns)) { return NVME_INVALID_FORMAT | NVME_DNR; } diff --git a/hw/nvme/ns.c b/hw/nvme/ns.c index 8cdcaec99880..419e501239da 100644 --- a/hw/nvme/ns.c +++ b/hw/nvme/ns.c @@ -167,6 +167,8 @@ static int nvme_ns_init_blk(NvmeNamespace *ns, Error **errp) static int nvme_ns_zoned_check_calc_geometry(NvmeNamespace *ns, Error **errp) { + NvmeNamespaceZoned *zoned = NVME_NAMESPACE_ZONED(ns); + uint64_t zone_size, zone_cap; /* Make sure that the values of ZNS properties are sane */ @@ -200,12 +202,12 @@ static int nvme_ns_zoned_check_calc_geometry(NvmeNamespace *ns, Error **errp) * Save the main zone geometry values to avoid * calculating them later again. 
*/ - ns->zone_size = zone_size / ns->lbasz; - ns->zone_capacity = zone_cap / ns->lbasz; - ns->num_zones = le64_to_cpu(ns->id_ns.nsze) / ns->zone_size; + zoned->zone_size = zone_size / ns->lbasz; + zoned->zone_capacity = zone_cap / ns->lbasz; + zoned->num_zones = le64_to_cpu(ns->id_ns.nsze) / zoned->zone_size; /* Do a few more sanity checks of ZNS properties */ - if (!ns->num_zones) { + if (!zoned->num_zones) { error_setg(errp, "insufficient drive capacity, must be at least the size " "of one zone (%"PRIu64"B)", zone_size); @@ -215,68 +217,70 @@ static int nvme_ns_zoned_check_calc_geometry(NvmeNamespace *ns, Error **errp) return 0; } -static void nvme_ns_zoned_init_state(NvmeNamespace *ns) +static void nvme_ns_zoned_init_state(NvmeNamespaceZoned *zoned) { - uint64_t start = 0, zone_size = ns->zone_size; - uint64_t capacity = ns->num_zones * zone_size; + uint64_t start = 0, zone_size = zoned->zone_size; + uint64_t capacity = zoned->num_zones * zone_size; NvmeZone *zone; int i; - ns->zone_array = g_new0(NvmeZone, ns->num_zones); - if (ns->params.zd_extension_size) { - ns->zd_extensions = g_malloc0(ns->params.zd_extension_size * - ns->num_zones); + zoned->zone_array = g_new0(NvmeZone, zoned->num_zones); + if (zoned->zd_extension_size) { + zoned->zd_extensions = g_malloc0(zoned->zd_extension_size * + zoned->num_zones); } - QTAILQ_INIT(&ns->exp_open_zones); - QTAILQ_INIT(&ns->imp_open_zones); - QTAILQ_INIT(&ns->closed_zones); - QTAILQ_INIT(&ns->full_zones); + QTAILQ_INIT(&zoned->exp_open_zones); + QTAILQ_INIT(&zoned->imp_open_zones); + QTAILQ_INIT(&zoned->closed_zones); + QTAILQ_INIT(&zoned->full_zones); - zone = ns->zone_array; - for (i = 0; i < ns->num_zones; i++, zone++) { + zone = zoned->zone_array; + for (i = 0; i < zoned->num_zones; i++, zone++) { if (start + zone_size > capacity) { zone_size = capacity - start; } zone->d.zt = NVME_ZONE_TYPE_SEQ_WRITE; nvme_zoned_set_zs(zone, NVME_ZONE_STATE_EMPTY); zone->d.za = 0; - zone->d.zcap = ns->zone_capacity; + zone->d.zcap 
= zoned->zone_capacity; zone->d.zslba = start; zone->d.wp = start; zone->w_ptr = start; start += zone_size; } - ns->zone_size_log2 = 0; - if (is_power_of_2(ns->zone_size)) { - ns->zone_size_log2 = 63 - clz64(ns->zone_size); + zoned->zone_size_log2 = 0; + if (is_power_of_2(zoned->zone_size)) { + zoned->zone_size_log2 = 63 - clz64(zoned->zone_size); } } static void nvme_ns_zoned_init(NvmeNamespace *ns) { - NvmeIdNsZoned *id_ns_z; + NvmeNamespaceZoned *zoned = NVME_NAMESPACE_ZONED(ns); + NvmeIdNsZoned *id_ns_z = &zoned->id_ns; int i; - nvme_ns_zoned_init_state(ns); - - id_ns_z = g_malloc0(sizeof(NvmeIdNsZoned)); + nvme_ns_zoned_init_state(zoned); /* MAR/MOR are zeroes-based, FFFFFFFFFh means no limit */ - id_ns_z->mar = cpu_to_le32(ns->params.max_active_zones - 1); - id_ns_z->mor = cpu_to_le32(ns->params.max_open_zones - 1); + id_ns_z->mar = cpu_to_le32(zoned->max_active_zones - 1); + id_ns_z->mor = cpu_to_le32(zoned->max_open_zones - 1); id_ns_z->zoc = 0; - id_ns_z->ozcs = ns->params.cross_zone_read ? 0x01 : 0x00; + + if (zoned->flags & NVME_NS_ZONED_CROSS_READ) { + id_ns_z->ozcs |= NVME_ID_NS_ZONED_OZCS_CROSS_READ; + } for (i = 0; i <= ns->id_ns.nlbaf; i++) { - id_ns_z->lbafe[i].zsze = cpu_to_le64(ns->zone_size); + id_ns_z->lbafe[i].zsze = cpu_to_le64(zoned->zone_size); id_ns_z->lbafe[i].zdes = - ns->params.zd_extension_size >> 6; /* Units of 64B */ + zoned->zd_extension_size >> 6; /* Units of 64B */ } ns->csi = NVME_CSI_ZONED; - ns->id_ns.nsze = cpu_to_le64(ns->num_zones * ns->zone_size); + ns->id_ns.nsze = cpu_to_le64(zoned->num_zones * zoned->zone_size); ns->id_ns.ncap = ns->id_ns.nsze; ns->id_ns.nuse = ns->id_ns.ncap; @@ -287,19 +291,17 @@ static void nvme_ns_zoned_init(NvmeNamespace *ns) * we can only support DULBE if the zone size is a multiple of the * calculated NPDG. 
*/ - if (ns->zone_size % (ns->id_ns.npdg + 1)) { + if (zoned->zone_size % (ns->id_ns.npdg + 1)) { warn_report("the zone size (%"PRIu64" blocks) is not a multiple of " "the calculated deallocation granularity (%d blocks); " "DULBE support disabled", - ns->zone_size, ns->id_ns.npdg + 1); + zoned->zone_size, ns->id_ns.npdg + 1); ns->id_ns.nsfeat &= ~0x4; } - - ns->id_ns_zoned = id_ns_z; } -static void nvme_ns_zoned_clear_zone(NvmeNamespace *ns, NvmeZone *zone) +static void nvme_ns_zoned_clear_zone(NvmeNamespaceZoned *zoned, NvmeZone *zone) { uint8_t state; @@ -311,8 +313,8 @@ static void nvme_ns_zoned_clear_zone(NvmeNamespace *ns, NvmeZone *zone) trace_pci_nvme_clear_ns_close(state, zone->d.zslba); nvme_zoned_set_zs(zone, NVME_ZONE_STATE_CLOSED); } - nvme_ns_zoned_aor_inc_active(ns); - QTAILQ_INSERT_HEAD(&ns->closed_zones, zone, entry); + nvme_ns_zoned_aor_inc_active(zoned); + QTAILQ_INSERT_HEAD(&zoned->closed_zones, zone, entry); } else { trace_pci_nvme_clear_ns_reset(state, zone->d.zslba); nvme_zoned_set_zs(zone, NVME_ZONE_STATE_EMPTY); @@ -322,29 +324,29 @@ static void nvme_ns_zoned_clear_zone(NvmeNamespace *ns, NvmeZone *zone) /* * Close all the zones that are currently open. 
*/ -static void nvme_ns_zoned_shutdown(NvmeNamespace *ns) +static void nvme_ns_zoned_shutdown(NvmeNamespaceZoned *zoned) { NvmeZone *zone, *next; - QTAILQ_FOREACH_SAFE(zone, &ns->closed_zones, entry, next) { - QTAILQ_REMOVE(&ns->closed_zones, zone, entry); - nvme_ns_zoned_aor_dec_active(ns); - nvme_ns_zoned_clear_zone(ns, zone); + QTAILQ_FOREACH_SAFE(zone, &zoned->closed_zones, entry, next) { + QTAILQ_REMOVE(&zoned->closed_zones, zone, entry); + nvme_ns_zoned_aor_dec_active(zoned); + nvme_ns_zoned_clear_zone(zoned, zone); } - QTAILQ_FOREACH_SAFE(zone, &ns->imp_open_zones, entry, next) { - QTAILQ_REMOVE(&ns->imp_open_zones, zone, entry); - nvme_ns_zoned_aor_dec_open(ns); - nvme_ns_zoned_aor_dec_active(ns); - nvme_ns_zoned_clear_zone(ns, zone); + QTAILQ_FOREACH_SAFE(zone, &zoned->imp_open_zones, entry, next) { + QTAILQ_REMOVE(&zoned->imp_open_zones, zone, entry); + nvme_ns_zoned_aor_dec_open(zoned); + nvme_ns_zoned_aor_dec_active(zoned); + nvme_ns_zoned_clear_zone(zoned, zone); } - QTAILQ_FOREACH_SAFE(zone, &ns->exp_open_zones, entry, next) { - QTAILQ_REMOVE(&ns->exp_open_zones, zone, entry); - nvme_ns_zoned_aor_dec_open(ns); - nvme_ns_zoned_aor_dec_active(ns); - nvme_ns_zoned_clear_zone(ns, zone); + QTAILQ_FOREACH_SAFE(zone, &zoned->exp_open_zones, entry, next) { + QTAILQ_REMOVE(&zoned->exp_open_zones, zone, entry); + nvme_ns_zoned_aor_dec_open(zoned); + nvme_ns_zoned_aor_dec_active(zoned); + nvme_ns_zoned_clear_zone(zoned, zone); } - assert(ns->nr_open_zones == 0); + assert(zoned->nr_open_zones == 0); } static int nvme_ns_check_constraints(NvmeNamespace *ns, Error **errp) @@ -411,9 +413,20 @@ int nvme_ns_setup(NvmeNamespace *ns, Error **errp) return -1; } if (ns->params.zoned) { + NvmeNamespaceZoned *zoned = NVME_NAMESPACE_ZONED(ns); + if (nvme_ns_zoned_check_calc_geometry(ns, errp) != 0) { return -1; } + + /* copy device parameters */ + zoned->zd_extension_size = ns->params.zd_extension_size; + zoned->max_open_zones = ns->params.max_open_zones; + 
zoned->max_active_zones = ns->params.max_active_zones; + if (ns->params.cross_zone_read) { + zoned->flags |= NVME_NS_ZONED_CROSS_READ; + } + nvme_ns_zoned_init(ns); } @@ -428,17 +441,18 @@ void nvme_ns_drain(NvmeNamespace *ns) void nvme_ns_shutdown(NvmeNamespace *ns) { blk_flush(ns->blkconf.blk); - if (ns->params.zoned) { - nvme_ns_zoned_shutdown(ns); + if (nvme_ns_zoned(ns)) { + nvme_ns_zoned_shutdown(NVME_NAMESPACE_ZONED(ns)); } } void nvme_ns_cleanup(NvmeNamespace *ns) { - if (ns->params.zoned) { - g_free(ns->id_ns_zoned); - g_free(ns->zone_array); - g_free(ns->zd_extensions); + if (nvme_ns_zoned(ns)) { + NvmeNamespaceZoned *zoned = NVME_NAMESPACE_ZONED(ns); + + g_free(zoned->zone_array); + g_free(zoned->zd_extensions); } } diff --git a/hw/nvme/nvme.h b/hw/nvme/nvme.h index 99d8b9066cc9..9cfb172101a9 100644 --- a/hw/nvme/nvme.h +++ b/hw/nvme/nvme.h @@ -84,12 +84,6 @@ static inline NvmeNamespace *nvme_subsys_ns(NvmeSubsystem *subsys, #define NVME_NS(obj) \ OBJECT_CHECK(NvmeNamespace, (obj), TYPE_NVME_NS) -typedef struct NvmeZone { - NvmeZoneDescr d; - uint64_t w_ptr; - QTAILQ_ENTRY(NvmeZone) entry; -} NvmeZone; - typedef struct NvmeNamespaceParams { bool detached; bool shared; @@ -116,6 +110,43 @@ typedef struct NvmeNamespaceParams { uint32_t zd_extension_size; } NvmeNamespaceParams; +typedef struct NvmeZone { + NvmeZoneDescr d; + uint64_t w_ptr; + QTAILQ_ENTRY(NvmeZone) entry; +} NvmeZone; + +enum { + NVME_NS_ZONED_CROSS_READ = 1 << 0, +}; + +typedef struct NvmeNamespaceZoned { + NvmeIdNsZoned id_ns; + + uint32_t num_zones; + NvmeZone *zone_array; + + uint64_t zone_size; + uint32_t zone_size_log2; + + uint64_t zone_capacity; + + uint32_t zd_extension_size; + uint8_t *zd_extensions; + + uint32_t max_open_zones; + int32_t nr_open_zones; + uint32_t max_active_zones; + int32_t nr_active_zones; + + unsigned long flags; + + QTAILQ_HEAD(, NvmeZone) exp_open_zones; + QTAILQ_HEAD(, NvmeZone) imp_open_zones; + QTAILQ_HEAD(, NvmeZone) closed_zones; + QTAILQ_HEAD(, NvmeZone)
full_zones; +} NvmeNamespaceZoned; + typedef struct NvmeNamespace { DeviceState parent_obj; BlockConf blkconf; @@ -132,27 +163,17 @@ typedef struct NvmeNamespace { QTAILQ_ENTRY(NvmeNamespace) entry; - NvmeIdNsZoned *id_ns_zoned; - NvmeZone *zone_array; - QTAILQ_HEAD(, NvmeZone) exp_open_zones; - QTAILQ_HEAD(, NvmeZone) imp_open_zones; - QTAILQ_HEAD(, NvmeZone) closed_zones; - QTAILQ_HEAD(, NvmeZone) full_zones; - uint32_t num_zones; - uint64_t zone_size; - uint64_t zone_capacity; - uint32_t zone_size_log2; - uint8_t *zd_extensions; - int32_t nr_open_zones; - int32_t nr_active_zones; - NvmeNamespaceParams params; struct { uint32_t err_rec; } features; + + NvmeNamespaceZoned zoned; } NvmeNamespace; +#define NVME_NAMESPACE_ZONED(ns) (&(ns)->zoned) + static inline uint32_t nvme_nsid(NvmeNamespace *ns) { if (ns) { @@ -188,6 +209,11 @@ void nvme_ns_drain(NvmeNamespace *ns); void nvme_ns_shutdown(NvmeNamespace *ns); void nvme_ns_cleanup(NvmeNamespace *ns); +static inline bool nvme_ns_zoned(NvmeNamespace *ns) +{ + return ns->csi == NVME_CSI_ZONED; +} + typedef struct NvmeAsyncEvent { QTAILQ_ENTRY(NvmeAsyncEvent) entry; NvmeAerResult result; diff --git a/hw/nvme/zoned.h b/hw/nvme/zoned.h index e98b282cb615..277d70aee5c7 100644 --- a/hw/nvme/zoned.h +++ b/hw/nvme/zoned.h @@ -17,10 +17,10 @@ static inline void nvme_zoned_set_zs(NvmeZone *zone, NvmeZoneState state) zone->d.zs = state << 4; } -static inline uint64_t nvme_zoned_zone_rd_boundary(NvmeNamespace *ns, +static inline uint64_t nvme_zoned_zone_rd_boundary(NvmeNamespaceZoned *zoned, NvmeZone *zone) { - return zone->d.zslba + ns->zone_size; + return zone->d.zslba + zoned->zone_size; } static inline uint64_t nvme_zoned_zone_wr_boundary(NvmeZone *zone) @@ -37,61 +37,63 @@ static inline bool nvme_zoned_wp_valid(NvmeZone *zone) st != NVME_ZONE_STATE_OFFLINE; } -static inline uint32_t nvme_ns_zoned_zidx(NvmeNamespace *ns, uint64_t slba) +static inline uint32_t nvme_ns_zoned_zidx(NvmeNamespaceZoned *zoned, + uint64_t slba) { - 
return ns->zone_size_log2 > 0 ? slba >> ns->zone_size_log2 : - slba / ns->zone_size; + return zoned->zone_size_log2 > 0 ? + slba >> zoned->zone_size_log2 : slba / zoned->zone_size; } -static inline NvmeZone *nvme_ns_zoned_get_by_slba(NvmeNamespace *ns, uint64_t slba) +static inline NvmeZone *nvme_ns_zoned_get_by_slba(NvmeNamespaceZoned *zoned, + uint64_t slba) { - uint32_t zone_idx = nvme_ns_zoned_zidx(ns, slba); + uint32_t zone_idx = nvme_ns_zoned_zidx(zoned, slba); - assert(zone_idx < ns->num_zones); - return &ns->zone_array[zone_idx]; + assert(zone_idx < zoned->num_zones); + return &zoned->zone_array[zone_idx]; } -static inline uint8_t *nvme_ns_zoned_zde(NvmeNamespace *ns, uint32_t zone_idx) +static inline uint8_t *nvme_ns_zoned_zde(NvmeNamespaceZoned *zoned, + uint32_t zone_idx) { - return &ns->zd_extensions[zone_idx * ns->params.zd_extension_size]; + return &zoned->zd_extensions[zone_idx * zoned->zd_extension_size]; } -static inline void nvme_ns_zoned_aor_inc_open(NvmeNamespace *ns) +static inline void nvme_ns_zoned_aor_inc_open(NvmeNamespaceZoned *zoned) { - assert(ns->nr_open_zones >= 0); - if (ns->params.max_open_zones) { - ns->nr_open_zones++; - assert(ns->nr_open_zones <= ns->params.max_open_zones); + assert(zoned->nr_open_zones >= 0); + if (zoned->max_open_zones) { + zoned->nr_open_zones++; + assert(zoned->nr_open_zones <= zoned->max_open_zones); } } -static inline void nvme_ns_zoned_aor_dec_open(NvmeNamespace *ns) +static inline void nvme_ns_zoned_aor_dec_open(NvmeNamespaceZoned *zoned) { - if (ns->params.max_open_zones) { - assert(ns->nr_open_zones > 0); - ns->nr_open_zones--; + if (zoned->max_open_zones) { + assert(zoned->nr_open_zones > 0); + zoned->nr_open_zones--; } - assert(ns->nr_open_zones >= 0); + assert(zoned->nr_open_zones >= 0); } -static inline void nvme_ns_zoned_aor_inc_active(NvmeNamespace *ns) +static inline void nvme_ns_zoned_aor_inc_active(NvmeNamespaceZoned *zoned) { - assert(ns->nr_active_zones >= 0); - if 
(ns->params.max_active_zones) { - ns->nr_active_zones++; - assert(ns->nr_active_zones <= ns->params.max_active_zones); + assert(zoned->nr_active_zones >= 0); + if (zoned->max_active_zones) { + zoned->nr_active_zones++; + assert(zoned->nr_active_zones <= zoned->max_active_zones); + } } -static inline void nvme_ns_zoned_aor_dec_active(NvmeNamespace *ns) +static inline void nvme_ns_zoned_aor_dec_active(NvmeNamespaceZoned *zoned) { - if (ns->params.max_active_zones) { - assert(ns->nr_active_zones > 0); - ns->nr_active_zones--; - assert(ns->nr_active_zones >= ns->nr_open_zones); + if (zoned->max_active_zones) { + assert(zoned->nr_active_zones > 0); + zoned->nr_active_zones--; + assert(zoned->nr_active_zones >= zoned->nr_open_zones); } - assert(ns->nr_active_zones >= 0); + assert(zoned->nr_active_zones >= 0); } - #endif /* HW_NVME_ZONED_H */ diff --git a/include/block/nvme.h b/include/block/nvme.h index e3bd47bf76ab..2bcabe561589 100644 --- a/include/block/nvme.h +++ b/include/block/nvme.h @@ -1338,6 +1338,10 @@ enum NvmeCsi { #define NVME_SET_CSI(vec, csi) (vec |= (uint8_t)(1 << (csi))) +enum NvmeIdNsZonedOzcs { + NVME_ID_NS_ZONED_OZCS_CROSS_READ = 1 << 0, +}; + typedef struct QEMU_PACKED NvmeIdNsZoned { uint16_t zoc; uint16_t ozcs;

From patchwork Tue Sep 14 20:37:28 2021
X-Patchwork-Submitter: Klaus Jensen
X-Patchwork-Id: 1528129
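A side note on the zoned.h helpers in the patch above: `nvme_ns_zoned_zidx()` avoids a 64-bit division on the I/O path by precomputing `zone_size_log2 = 63 - clz64(zone_size)` whenever the zone size is a power of two, and shifting instead. A standalone sketch of that trick (plain C with hypothetical names, not the QEMU code itself):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the zone-index computation in nvme_ns_zoned_zidx(): divide
 * the starting LBA by the zone size, using a shift when the zone size
 * is a power of two. Names are illustrative, not QEMU's. */
static uint32_t zone_index(uint64_t slba, uint64_t zone_size,
                           uint32_t zone_size_log2)
{
    return zone_size_log2 > 0 ? (uint32_t)(slba >> zone_size_log2)
                              : (uint32_t)(slba / zone_size);
}

/* 63 - clz64(x) == floor(log2(x)) for x > 0; returns 0 when the size
 * is not a power of two, signalling "fall back to division". */
static uint32_t log2_if_pow2(uint64_t zone_size)
{
    if (zone_size & (zone_size - 1)) {
        return 0;
    }
    return 63 - (uint32_t)__builtin_clzll(zone_size);
}
```

A `zone_size_log2` of 0 doubles as the "not a power of two" marker, which is why the lookup falls back to plain division in that case.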
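Another detail worth calling out from `nvme_ns_zoned_init()` in the patch above: the MAR and MOR fields of the zoned identify structure are zeroes-based, so a limit of N is stored as N - 1, and a configured limit of 0 ("unlimited") wraps to 0xFFFFFFFF, which the spec reads as "no limit". A sketch of that encoding (illustrative helpers, not QEMU's code):

```c
#include <assert.h>
#include <stdint.h>

/* MAR/MOR are zeroes-based: a limit of N is stored as N - 1. With a
 * configured limit of 0 ("unlimited"), the unsigned subtraction wraps
 * to 0xFFFFFFFF, which the spec defines as "no limit" -- one
 * expression handles both cases. Illustrative, not QEMU's code. */
static uint32_t encode_zeroes_based(uint32_t limit)
{
    return limit - 1;
}

/* The inverse, for a consumer reading the field back. */
static uint32_t decode_zeroes_based(uint32_t field)
{
    return field == UINT32_MAX ? 0 : field + 1;
}
```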
From: Klaus Jensen
To: qemu-devel@nongnu.org
Subject: [PATCH RFC 04/13] hw/nvme: move nvm namespace members to separate struct
Date: Tue, 14 Sep 2021 22:37:28 +0200
Message-Id: <20210914203737.182571-5-its@irrelevant.dk>
In-Reply-To: <20210914203737.182571-1-its@irrelevant.dk>
References: <20210914203737.182571-1-its@irrelevant.dk>
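This patch carries no commit-message body, but the mechanical change mirrors the zoned patch earlier in the series: NVM-specific fields such as `lbasz` and `lbaf` move behind an `NVME_NAMESPACE_NVM()` accessor, and helpers like `nvme_l2b()`/`nvme_m2b()` take the narrower struct. A minimal sketch of this embedded-substruct pattern (the names are illustrative stand-ins, not the real QEMU definitions):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-ins: the NVM-specific state lives in a sub-struct
 * embedded in the namespace and is reached through a macro, so helpers
 * can take the narrower type. */
typedef struct NvmFields {
    uint32_t lbasz; /* data bytes per logical block */
    uint32_t ms;    /* metadata bytes per logical block */
} NvmFields;

typedef struct Namespace {
    NvmFields nvm;
} Namespace;

#define NAMESPACE_NVM(ns) (&(ns)->nvm)

/* logical blocks -> data bytes, in the spirit of nvme_l2b() */
static uint64_t l2b(NvmFields *nvm, uint32_t nlb)
{
    return (uint64_t)nlb * nvm->lbasz;
}

/* logical blocks -> metadata bytes, in the spirit of nvme_m2b() */
static uint64_t m2b(NvmFields *nvm, uint32_t nlb)
{
    return (uint64_t)nlb * nvm->ms;
}
```

Because the sub-struct is embedded rather than heap-allocated, the accessor macro is just an address-of: the refactor narrows the types the helpers see without changing any object lifetimes.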
Cc: Kevin Wolf, qemu-block@nongnu.org, Klaus Jensen, Markus Armbruster, Hanna Reitz, Stefan Hajnoczi, Keith Busch, Philippe Mathieu-Daudé

From: Klaus Jensen Signed-off-by: Klaus Jensen --- hw/nvme/ctrl.c | 282 +++++++++++++++++++++++++++---------------------- hw/nvme/dif.c | 101 +++++++++--------- hw/nvme/dif.h | 12 +-- hw/nvme/ns.c | 72 +++++++------ hw/nvme/nvme.h | 45 +++++--- 5 files changed, 290 insertions(+), 222 deletions(-) diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c index 4c30823d389f..7f41181aafa1 100644 --- a/hw/nvme/ctrl.c +++ b/hw/nvme/ctrl.c @@ -528,11 +528,11 @@ static inline void nvme_sg_unmap(NvmeSg *sg) * holds both data and metadata. This function splits the data and metadata * into two separate QSG/IOVs. */ -static void nvme_sg_split(NvmeSg *sg, NvmeNamespace *ns, NvmeSg *data, +static void nvme_sg_split(NvmeSg *sg, NvmeNamespaceNvm *nvm, NvmeSg *data, NvmeSg *mdata) { NvmeSg *dst = data; - uint32_t trans_len, count = ns->lbasz; + uint32_t trans_len, count = nvm->lbasz; uint64_t offset = 0; bool dma = sg->flags & NVME_SG_DMA; size_t sge_len; @@ -564,7 +564,7 @@ static void nvme_sg_split(NvmeSg *sg, NvmeNamespace *ns, NvmeSg *data, if (count == 0) { dst = (dst == data) ? mdata : data; - count = (dst == data) ?
nvm->lbasz : nvm->lbaf.ms; } if (sge_len == offset) { @@ -1029,17 +1029,17 @@ static uint16_t nvme_map_mptr(NvmeCtrl *n, NvmeSg *sg, size_t len, static uint16_t nvme_map_data(NvmeCtrl *n, uint32_t nlb, NvmeRequest *req) { - NvmeNamespace *ns = req->ns; + NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(req->ns); NvmeRwCmd *rw = (NvmeRwCmd *)&req->cmd; - bool pi = !!NVME_ID_NS_DPS_TYPE(ns->id_ns.dps); + bool pi = !!NVME_ID_NS_DPS_TYPE(nvm->id_ns.dps); bool pract = !!(le16_to_cpu(rw->control) & NVME_RW_PRINFO_PRACT); - size_t len = nvme_l2b(ns, nlb); + size_t len = nvme_l2b(nvm, nlb); uint16_t status; - if (nvme_ns_ext(ns) && !(pi && pract && ns->lbaf.ms == 8)) { + if (nvme_ns_ext(nvm) && !(pi && pract && nvm->lbaf.ms == 8)) { NvmeSg sg; - len += nvme_m2b(ns, nlb); + len += nvme_m2b(nvm, nlb); status = nvme_map_dptr(n, &sg, len, &req->cmd); if (status) { @@ -1047,7 +1047,7 @@ static uint16_t nvme_map_data(NvmeCtrl *n, uint32_t nlb, NvmeRequest *req) } nvme_sg_init(n, &req->sg, sg.flags & NVME_SG_DMA); - nvme_sg_split(&sg, ns, &req->sg, NULL); + nvme_sg_split(&sg, nvm, &req->sg, NULL); nvme_sg_unmap(&sg); return NVME_SUCCESS; @@ -1058,14 +1058,14 @@ static uint16_t nvme_map_data(NvmeCtrl *n, uint32_t nlb, NvmeRequest *req) static uint16_t nvme_map_mdata(NvmeCtrl *n, uint32_t nlb, NvmeRequest *req) { - NvmeNamespace *ns = req->ns; - size_t len = nvme_m2b(ns, nlb); + NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(req->ns); + size_t len = nvme_m2b(nvm, nlb); uint16_t status; - if (nvme_ns_ext(ns)) { + if (nvme_ns_ext(nvm)) { NvmeSg sg; - len += nvme_l2b(ns, nlb); + len += nvme_l2b(nvm, nlb); status = nvme_map_dptr(n, &sg, len, &req->cmd); if (status) { @@ -1073,7 +1073,7 @@ static uint16_t nvme_map_mdata(NvmeCtrl *n, uint32_t nlb, NvmeRequest *req) } nvme_sg_init(n, &req->sg, sg.flags & NVME_SG_DMA); - nvme_sg_split(&sg, ns, NULL, &req->sg); + nvme_sg_split(&sg, nvm, NULL, &req->sg); nvme_sg_unmap(&sg); return NVME_SUCCESS; @@ -1209,14 +1209,14 @@ static inline uint16_t 
nvme_h2c(NvmeCtrl *n, uint8_t *ptr, uint32_t len, uint16_t nvme_bounce_data(NvmeCtrl *n, uint8_t *ptr, uint32_t len, NvmeTxDirection dir, NvmeRequest *req) { - NvmeNamespace *ns = req->ns; + NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(req->ns); NvmeRwCmd *rw = (NvmeRwCmd *)&req->cmd; - bool pi = !!NVME_ID_NS_DPS_TYPE(ns->id_ns.dps); + bool pi = !!NVME_ID_NS_DPS_TYPE(nvm->id_ns.dps); bool pract = !!(le16_to_cpu(rw->control) & NVME_RW_PRINFO_PRACT); - if (nvme_ns_ext(ns) && !(pi && pract && ns->lbaf.ms == 8)) { - return nvme_tx_interleaved(n, &req->sg, ptr, len, ns->lbasz, - ns->lbaf.ms, 0, dir); + if (nvme_ns_ext(nvm) && !(pi && pract && nvm->lbaf.ms == 8)) { + return nvme_tx_interleaved(n, &req->sg, ptr, len, nvm->lbasz, + nvm->lbaf.ms, 0, dir); } return nvme_tx(n, &req->sg, ptr, len, dir); @@ -1225,12 +1225,12 @@ uint16_t nvme_bounce_data(NvmeCtrl *n, uint8_t *ptr, uint32_t len, uint16_t nvme_bounce_mdata(NvmeCtrl *n, uint8_t *ptr, uint32_t len, NvmeTxDirection dir, NvmeRequest *req) { - NvmeNamespace *ns = req->ns; + NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(req->ns); uint16_t status; - if (nvme_ns_ext(ns)) { - return nvme_tx_interleaved(n, &req->sg, ptr, len, ns->lbaf.ms, - ns->lbasz, ns->lbasz, dir); + if (nvme_ns_ext(nvm)) { + return nvme_tx_interleaved(n, &req->sg, ptr, len, nvm->lbaf.ms, + nvm->lbasz, nvm->lbasz, dir); } nvme_sg_unmap(&req->sg); @@ -1448,10 +1448,10 @@ static inline uint16_t nvme_check_mdts(NvmeCtrl *n, size_t len) return NVME_SUCCESS; } -static inline uint16_t nvme_check_bounds(NvmeNamespace *ns, uint64_t slba, +static inline uint16_t nvme_check_bounds(NvmeNamespaceNvm *nvm, uint64_t slba, uint32_t nlb) { - uint64_t nsze = le64_to_cpu(ns->id_ns.nsze); + uint64_t nsze = le64_to_cpu(nvm->id_ns.nsze); if (unlikely(UINT64_MAX - slba < nlb || slba + nlb > nsze)) { trace_pci_nvme_err_invalid_lba_range(slba, nlb, nsze); @@ -1464,10 +1464,11 @@ static inline uint16_t nvme_check_bounds(NvmeNamespace *ns, uint64_t slba, static int 
nvme_block_status_all(NvmeNamespace *ns, uint64_t slba, uint32_t nlb, int flags) { + NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns); BlockDriverState *bs = blk_bs(ns->blkconf.blk); - int64_t pnum = 0, bytes = nvme_l2b(ns, nlb); - int64_t offset = nvme_l2b(ns, slba); + int64_t pnum = 0, bytes = nvme_l2b(nvm, nlb); + int64_t offset = nvme_l2b(nvm, slba); int ret; /* @@ -1888,6 +1889,7 @@ static void nvme_rw_cb(void *opaque, int ret) { NvmeRequest *req = opaque; NvmeNamespace *ns = req->ns; + NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns); BlockBackend *blk = ns->blkconf.blk; @@ -1897,14 +1899,14 @@ static void nvme_rw_cb(void *opaque, int ret) goto out; } - if (ns->lbaf.ms) { + if (nvm->lbaf.ms) { NvmeRwCmd *rw = (NvmeRwCmd *)&req->cmd; uint64_t slba = le64_to_cpu(rw->slba); uint32_t nlb = (uint32_t)le16_to_cpu(rw->nlb) + 1; - uint64_t offset = nvme_moff(ns, slba); + uint64_t offset = nvme_moff(nvm, slba); if (req->cmd.opcode == NVME_CMD_WRITE_ZEROES) { - size_t mlen = nvme_m2b(ns, nlb); + size_t mlen = nvme_m2b(nvm, nlb); req->aiocb = blk_aio_pwrite_zeroes(blk, offset, mlen, BDRV_REQ_MAY_UNMAP, @@ -1912,7 +1914,7 @@ static void nvme_rw_cb(void *opaque, int ret) return; } - if (nvme_ns_ext(ns) || req->cmd.mptr) { + if (nvme_ns_ext(nvm) || req->cmd.mptr) { uint16_t status; nvme_sg_unmap(&req->sg); @@ -1939,6 +1941,7 @@ static void nvme_verify_cb(void *opaque, int ret) NvmeBounceContext *ctx = opaque; NvmeRequest *req = ctx->req; NvmeNamespace *ns = req->ns; + NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns); BlockBackend *blk = ns->blkconf.blk; BlockAcctCookie *acct = &req->acct; BlockAcctStats *stats = blk_get_stats(blk); @@ -1960,7 +1963,7 @@ static void nvme_verify_cb(void *opaque, int ret) block_acct_done(stats, acct); - if (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps)) { + if (NVME_ID_NS_DPS_TYPE(nvm->id_ns.dps)) { status = nvme_dif_mangle_mdata(ns, ctx->mdata.bounce, ctx->mdata.iov.size, slba); if (status) { @@ -1968,7 +1971,7 @@ static void nvme_verify_cb(void *opaque, int 
 ret)
         goto out;
     }
 
-    req->status = nvme_dif_check(ns, ctx->data.bounce, ctx->data.iov.size,
+    req->status = nvme_dif_check(nvm, ctx->data.bounce, ctx->data.iov.size,
                                  ctx->mdata.bounce, ctx->mdata.iov.size,
                                  prinfo, slba, apptag, appmask, &reftag);
 }
 
@@ -1991,11 +1994,12 @@ static void nvme_verify_mdata_in_cb(void *opaque, int ret)
     NvmeBounceContext *ctx = opaque;
     NvmeRequest *req = ctx->req;
     NvmeNamespace *ns = req->ns;
+    NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns);
     NvmeRwCmd *rw = (NvmeRwCmd *)&req->cmd;
     uint64_t slba = le64_to_cpu(rw->slba);
     uint32_t nlb = le16_to_cpu(rw->nlb) + 1;
-    size_t mlen = nvme_m2b(ns, nlb);
-    uint64_t offset = nvme_moff(ns, slba);
+    size_t mlen = nvme_m2b(nvm, nlb);
+    uint64_t offset = nvme_moff(nvm, slba);
     BlockBackend *blk = ns->blkconf.blk;
 
     trace_pci_nvme_verify_mdata_in_cb(nvme_cid(req), blk_name(blk));
@@ -2033,6 +2037,7 @@ static void nvme_compare_mdata_cb(void *opaque, int ret)
 {
     NvmeRequest *req = opaque;
     NvmeNamespace *ns = req->ns;
+    NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns);
     NvmeCtrl *n = nvme_ctrl(req);
     NvmeRwCmd *rw = (NvmeRwCmd *)&req->cmd;
     uint8_t prinfo = NVME_RW_PRINFO(le16_to_cpu(rw->control));
@@ -2063,14 +2068,14 @@ static void nvme_compare_mdata_cb(void *opaque, int ret)
         goto out;
     }
 
-    if (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps)) {
+    if (NVME_ID_NS_DPS_TYPE(nvm->id_ns.dps)) {
         uint64_t slba = le64_to_cpu(rw->slba);
         uint8_t *bufp;
         uint8_t *mbufp = ctx->mdata.bounce;
         uint8_t *end = mbufp + ctx->mdata.iov.size;
         int16_t pil = 0;
 
-        status = nvme_dif_check(ns, ctx->data.bounce, ctx->data.iov.size,
+        status = nvme_dif_check(nvm, ctx->data.bounce, ctx->data.iov.size,
                                 ctx->mdata.bounce, ctx->mdata.iov.size,
                                 prinfo, slba, apptag, appmask, &reftag);
         if (status) {
@@ -2082,12 +2087,12 @@ static void nvme_compare_mdata_cb(void *opaque, int ret)
          * When formatted with protection information, do not compare the DIF
          * tuple.
          */
-        if (!(ns->id_ns.dps & NVME_ID_NS_DPS_FIRST_EIGHT)) {
-            pil = ns->lbaf.ms - sizeof(NvmeDifTuple);
+        if (!(nvm->id_ns.dps & NVME_ID_NS_DPS_FIRST_EIGHT)) {
+            pil = nvm->lbaf.ms - sizeof(NvmeDifTuple);
         }
 
-        for (bufp = buf; mbufp < end; bufp += ns->lbaf.ms, mbufp += ns->lbaf.ms) {
-            if (memcmp(bufp + pil, mbufp + pil, ns->lbaf.ms - pil)) {
+        for (bufp = buf; mbufp < end; bufp += nvm->lbaf.ms, mbufp += nvm->lbaf.ms) {
+            if (memcmp(bufp + pil, mbufp + pil, nvm->lbaf.ms - pil)) {
                 req->status = NVME_CMP_FAILURE;
                 goto out;
             }
@@ -2120,6 +2125,7 @@ static void nvme_compare_data_cb(void *opaque, int ret)
     NvmeRequest *req = opaque;
     NvmeCtrl *n = nvme_ctrl(req);
     NvmeNamespace *ns = req->ns;
+    NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns);
     BlockBackend *blk = ns->blkconf.blk;
     BlockAcctCookie *acct = &req->acct;
     BlockAcctStats *stats = blk_get_stats(blk);
@@ -2150,12 +2156,12 @@ static void nvme_compare_data_cb(void *opaque, int ret)
         goto out;
     }
 
-    if (ns->lbaf.ms) {
+    if (nvm->lbaf.ms) {
         NvmeRwCmd *rw = (NvmeRwCmd *)&req->cmd;
         uint64_t slba = le64_to_cpu(rw->slba);
         uint32_t nlb = le16_to_cpu(rw->nlb) + 1;
-        size_t mlen = nvme_m2b(ns, nlb);
-        uint64_t offset = nvme_moff(ns, slba);
+        size_t mlen = nvme_m2b(nvm, nlb);
+        uint64_t offset = nvme_moff(nvm, slba);
 
         ctx->mdata.bounce = g_malloc(mlen);
 
@@ -2232,6 +2238,7 @@ static void nvme_dsm_md_cb(void *opaque, int ret)
     NvmeDSMAIOCB *iocb = opaque;
     NvmeRequest *req = iocb->req;
     NvmeNamespace *ns = req->ns;
+    NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns);
     NvmeDsmRange *range;
     uint64_t slba;
     uint32_t nlb;
@@ -2241,7 +2248,7 @@
         goto done;
     }
 
-    if (!ns->lbaf.ms) {
+    if (!nvm->lbaf.ms) {
         nvme_dsm_cb(iocb, 0);
         return;
     }
@@ -2265,8 +2272,8 @@
         nvme_dsm_cb(iocb, 0);
     }
 
-    iocb->aiocb = blk_aio_pwrite_zeroes(ns->blkconf.blk, nvme_moff(ns, slba),
-                                        nvme_m2b(ns, nlb), BDRV_REQ_MAY_UNMAP,
+    iocb->aiocb = blk_aio_pwrite_zeroes(ns->blkconf.blk,
+                                        nvme_moff(nvm, slba),
+                                        nvme_m2b(nvm, nlb), BDRV_REQ_MAY_UNMAP,
                                         nvme_dsm_cb, iocb);
 
     return;
 
@@ -2281,6 +2288,7 @@ static void nvme_dsm_cb(void *opaque, int ret)
     NvmeDSMAIOCB *iocb = opaque;
     NvmeRequest *req = iocb->req;
     NvmeCtrl *n = nvme_ctrl(req);
     NvmeNamespace *ns = req->ns;
+    NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns);
     NvmeDsmRange *range;
     uint64_t slba;
     uint32_t nlb;
@@ -2306,14 +2314,14 @@ next:
         goto next;
     }
 
-    if (nvme_check_bounds(ns, slba, nlb)) {
+    if (nvme_check_bounds(nvm, slba, nlb)) {
         trace_pci_nvme_err_invalid_lba_range(slba, nlb,
-                                             ns->id_ns.nsze);
+                                             nvm->id_ns.nsze);
         goto next;
     }
 
-    iocb->aiocb = blk_aio_pdiscard(ns->blkconf.blk, nvme_l2b(ns, slba),
-                                   nvme_l2b(ns, nlb),
+    iocb->aiocb = blk_aio_pdiscard(ns->blkconf.blk, nvme_l2b(nvm, slba),
+                                   nvme_l2b(nvm, nlb),
                                    nvme_dsm_md_cb, iocb);
     return;
 
@@ -2362,11 +2370,12 @@ static uint16_t nvme_verify(NvmeCtrl *n, NvmeRequest *req)
 {
     NvmeRwCmd *rw = (NvmeRwCmd *)&req->cmd;
     NvmeNamespace *ns = req->ns;
+    NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns);
     BlockBackend *blk = ns->blkconf.blk;
     uint64_t slba = le64_to_cpu(rw->slba);
     uint32_t nlb = le16_to_cpu(rw->nlb) + 1;
-    size_t len = nvme_l2b(ns, nlb);
-    int64_t offset = nvme_l2b(ns, slba);
+    size_t len = nvme_l2b(nvm, nlb);
+    int64_t offset = nvme_l2b(nvm, slba);
     uint8_t prinfo = NVME_RW_PRINFO(le16_to_cpu(rw->control));
     uint32_t reftag = le32_to_cpu(rw->reftag);
     NvmeBounceContext *ctx = NULL;
@@ -2374,8 +2383,8 @@ static uint16_t nvme_verify(NvmeCtrl *n, NvmeRequest *req)
 
     trace_pci_nvme_verify(nvme_cid(req), nvme_nsid(ns), slba, nlb);
 
-    if (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps)) {
-        status = nvme_check_prinfo(ns, prinfo, slba, reftag);
+    if (NVME_ID_NS_DPS_TYPE(nvm->id_ns.dps)) {
+        status = nvme_check_prinfo(nvm, prinfo, slba, reftag);
         if (status) {
             return status;
         }
@@ -2389,7 +2398,7 @@ static uint16_t nvme_verify(NvmeCtrl *n, NvmeRequest *req)
         return NVME_INVALID_FIELD | NVME_DNR;
     }
 
-    status = nvme_check_bounds(ns, slba, nlb);
+    status = nvme_check_bounds(nvm, slba, nlb);
     if (status) {
         return status;
     }
@@ -2519,6 +2528,7 @@ static void nvme_copy_out_cb(void *opaque, int ret)
     NvmeCopyAIOCB *iocb = opaque;
     NvmeRequest *req = iocb->req;
     NvmeNamespace *ns = req->ns;
+    NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns);
     NvmeCopySourceRange *range;
     uint32_t nlb;
     size_t mlen;
@@ -2531,7 +2541,7 @@
         goto out;
     }
 
-    if (!ns->lbaf.ms) {
+    if (!nvm->lbaf.ms) {
         nvme_copy_out_completed_cb(iocb, 0);
         return;
     }
@@ -2539,13 +2549,13 @@
     range = &iocb->ranges[iocb->idx];
     nlb = le32_to_cpu(range->nlb) + 1;
 
-    mlen = nvme_m2b(ns, nlb);
-    mbounce = iocb->bounce + nvme_l2b(ns, nlb);
+    mlen = nvme_m2b(nvm, nlb);
+    mbounce = iocb->bounce + nvme_l2b(nvm, nlb);
 
     qemu_iovec_reset(&iocb->iov);
     qemu_iovec_add(&iocb->iov, mbounce, mlen);
 
-    iocb->aiocb = blk_aio_pwritev(ns->blkconf.blk, nvme_moff(ns, iocb->slba),
+    iocb->aiocb = blk_aio_pwritev(ns->blkconf.blk, nvme_moff(nvm, iocb->slba),
                                   &iocb->iov, 0, nvme_copy_out_completed_cb,
                                   iocb);
 
@@ -2560,6 +2570,7 @@ static void nvme_copy_in_completed_cb(void *opaque, int ret)
     NvmeCopyAIOCB *iocb = opaque;
     NvmeRequest *req = iocb->req;
     NvmeNamespace *ns = req->ns;
+    NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns);
     NvmeCopySourceRange *range;
     uint32_t nlb;
     size_t len;
@@ -2574,11 +2585,11 @@ static void nvme_copy_in_completed_cb(void *opaque, int ret)
 
     range = &iocb->ranges[iocb->idx];
     nlb = le32_to_cpu(range->nlb) + 1;
-    len = nvme_l2b(ns, nlb);
+    len = nvme_l2b(nvm, nlb);
 
     trace_pci_nvme_copy_out(iocb->slba, nlb);
 
-    if (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps)) {
+    if (NVME_ID_NS_DPS_TYPE(nvm->id_ns.dps)) {
         NvmeCopyCmd *copy = (NvmeCopyCmd *)&req->cmd;
 
         uint16_t prinfor = ((copy->control[0] >> 4) & 0xf);
@@ -2589,10 +2600,10 @@
         uint32_t reftag = le32_to_cpu(range->reftag);
         uint64_t slba = le64_to_cpu(range->slba);
 
-        size_t mlen = nvme_m2b(ns, nlb);
-        uint8_t *mbounce = iocb->bounce + nvme_l2b(ns, nlb);
+        size_t mlen = nvme_m2b(nvm, nlb);
+        uint8_t *mbounce = iocb->bounce + nvme_l2b(nvm, nlb);
 
-        status = nvme_dif_check(ns, iocb->bounce, len, mbounce, mlen, prinfor,
+        status = nvme_dif_check(nvm, iocb->bounce, len, mbounce, mlen, prinfor,
                                 slba, apptag, appmask, &reftag);
         if (status) {
             goto invalid;
@@ -2602,15 +2613,15 @@
         appmask = le16_to_cpu(copy->appmask);
 
         if (prinfow & NVME_PRINFO_PRACT) {
-            status = nvme_check_prinfo(ns, prinfow, iocb->slba, iocb->reftag);
+            status = nvme_check_prinfo(nvm, prinfow, iocb->slba, iocb->reftag);
             if (status) {
                 goto invalid;
             }
 
-            nvme_dif_pract_generate_dif(ns, iocb->bounce, len, mbounce, mlen,
+            nvme_dif_pract_generate_dif(nvm, iocb->bounce, len, mbounce, mlen,
                                         apptag, &iocb->reftag);
         } else {
-            status = nvme_dif_check(ns, iocb->bounce, len, mbounce, mlen,
+            status = nvme_dif_check(nvm, iocb->bounce, len, mbounce, mlen,
                                     prinfow, iocb->slba, apptag, appmask,
                                     &iocb->reftag);
             if (status) {
@@ -2619,7 +2630,7 @@
         }
     }
 
-    status = nvme_check_bounds(ns, iocb->slba, nlb);
+    status = nvme_check_bounds(nvm, iocb->slba, nlb);
     if (status) {
         goto invalid;
     }
@@ -2636,7 +2647,7 @@
     qemu_iovec_reset(&iocb->iov);
     qemu_iovec_add(&iocb->iov, iocb->bounce, len);
 
-    iocb->aiocb = blk_aio_pwritev(ns->blkconf.blk, nvme_l2b(ns, iocb->slba),
+    iocb->aiocb = blk_aio_pwritev(ns->blkconf.blk, nvme_l2b(nvm, iocb->slba),
                                   &iocb->iov, 0, nvme_copy_out_cb, iocb);
 
     return;
@@ -2659,6 +2670,7 @@ static void nvme_copy_in_cb(void *opaque, int ret)
     NvmeCopyAIOCB *iocb = opaque;
     NvmeRequest *req = iocb->req;
     NvmeNamespace *ns = req->ns;
+    NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns);
     NvmeCopySourceRange *range;
     uint64_t slba;
     uint32_t nlb;
@@ -2670,7 +2682,7 @@
         goto out;
     }
 
-    if (!ns->lbaf.ms) {
+    if (!nvm->lbaf.ms) {
         nvme_copy_in_completed_cb(iocb, 0);
         return;
     }
@@ -2680,10 +2692,10 @@
     nlb = le32_to_cpu(range->nlb) + 1;
 
     qemu_iovec_reset(&iocb->iov);
-    qemu_iovec_add(&iocb->iov, iocb->bounce + nvme_l2b(ns, nlb),
-                   nvme_m2b(ns, nlb));
+    qemu_iovec_add(&iocb->iov, iocb->bounce + nvme_l2b(nvm, nlb),
+                   nvme_m2b(nvm, nlb));
 
-    iocb->aiocb = blk_aio_preadv(ns->blkconf.blk, nvme_moff(ns, slba),
+    iocb->aiocb = blk_aio_preadv(ns->blkconf.blk, nvme_moff(nvm, slba),
                                  &iocb->iov, 0, nvme_copy_in_completed_cb,
                                  iocb);
     return;
@@ -2697,6 +2709,7 @@ static void nvme_copy_cb(void *opaque, int ret)
     NvmeCopyAIOCB *iocb = opaque;
     NvmeRequest *req = iocb->req;
     NvmeNamespace *ns = req->ns;
+    NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns);
     NvmeCopySourceRange *range;
     uint64_t slba;
     uint32_t nlb;
@@ -2717,16 +2730,16 @@
     range = &iocb->ranges[iocb->idx];
     slba = le64_to_cpu(range->slba);
     nlb = le32_to_cpu(range->nlb) + 1;
-    len = nvme_l2b(ns, nlb);
+    len = nvme_l2b(nvm, nlb);
 
     trace_pci_nvme_copy_source_range(slba, nlb);
 
-    if (nlb > le16_to_cpu(ns->id_ns.mssrl)) {
+    if (nlb > le16_to_cpu(nvm->id_ns.mssrl)) {
         status = NVME_CMD_SIZE_LIMIT | NVME_DNR;
         goto invalid;
     }
 
-    status = nvme_check_bounds(ns, slba, nlb);
+    status = nvme_check_bounds(nvm, slba, nlb);
     if (status) {
         goto invalid;
     }
@@ -2748,7 +2761,7 @@
     qemu_iovec_reset(&iocb->iov);
     qemu_iovec_add(&iocb->iov, iocb->bounce, len);
 
-    iocb->aiocb = blk_aio_preadv(ns->blkconf.blk, nvme_l2b(ns, slba),
+    iocb->aiocb = blk_aio_preadv(ns->blkconf.blk, nvme_l2b(nvm, slba),
                                  &iocb->iov, 0, nvme_copy_in_cb, iocb);
     return;
 
@@ -2765,6 +2778,7 @@ done:
 static uint16_t nvme_copy(NvmeCtrl *n, NvmeRequest *req)
 {
     NvmeNamespace *ns = req->ns;
+    NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns);
     NvmeCopyCmd *copy = (NvmeCopyCmd *)&req->cmd;
 
     NvmeCopyAIOCB *iocb = blk_aio_get(&nvme_copy_aiocb_info, ns->blkconf.blk,
                                       nvme_misc_cb, req);
@@ -2780,7 +2794,7 @@ static uint16_t nvme_copy(NvmeCtrl *n, NvmeRequest *req)
     iocb->ranges = NULL;
     iocb->zone = NULL;
 
-    if (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps) &&
+    if (NVME_ID_NS_DPS_TYPE(nvm->id_ns.dps) &&
         ((prinfor & NVME_PRINFO_PRACT) != (prinfow & NVME_PRINFO_PRACT))) {
         status = NVME_INVALID_FIELD | NVME_DNR;
         goto invalid;
@@ -2792,7 +2806,7 @@ static uint16_t nvme_copy(NvmeCtrl *n, NvmeRequest *req)
         goto invalid;
     }
 
-    if (nr > ns->id_ns.msrc + 1) {
+    if (nr > nvm->id_ns.msrc + 1) {
         status = NVME_CMD_SIZE_LIMIT | NVME_DNR;
         goto invalid;
     }
@@ -2828,8 +2842,8 @@ static uint16_t nvme_copy(NvmeCtrl *n, NvmeRequest *req)
     iocb->nr = nr;
     iocb->idx = 0;
     iocb->reftag = le32_to_cpu(copy->reftag);
-    iocb->bounce = g_malloc_n(le16_to_cpu(ns->id_ns.mssrl),
-                              ns->lbasz + ns->lbaf.ms);
+    iocb->bounce = g_malloc_n(le16_to_cpu(nvm->id_ns.mssrl),
+                              nvm->lbasz + nvm->lbaf.ms);
 
     qemu_iovec_init(&iocb->iov, 1);
 
@@ -2853,24 +2867,25 @@ static uint16_t nvme_compare(NvmeCtrl *n, NvmeRequest *req)
 {
     NvmeRwCmd *rw = (NvmeRwCmd *)&req->cmd;
     NvmeNamespace *ns = req->ns;
+    NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns);
     BlockBackend *blk = ns->blkconf.blk;
     uint64_t slba = le64_to_cpu(rw->slba);
     uint32_t nlb = le16_to_cpu(rw->nlb) + 1;
     uint8_t prinfo = NVME_RW_PRINFO(le16_to_cpu(rw->control));
-    size_t data_len = nvme_l2b(ns, nlb);
+    size_t data_len = nvme_l2b(nvm, nlb);
     size_t len = data_len;
-    int64_t offset = nvme_l2b(ns, slba);
+    int64_t offset = nvme_l2b(nvm, slba);
     struct nvme_compare_ctx *ctx = NULL;
     uint16_t status;
 
     trace_pci_nvme_compare(nvme_cid(req), nvme_nsid(ns), slba, nlb);
 
-    if (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps) && (prinfo & NVME_PRINFO_PRACT)) {
+    if (NVME_ID_NS_DPS_TYPE(nvm->id_ns.dps) && (prinfo & NVME_PRINFO_PRACT)) {
         return NVME_INVALID_PROT_INFO | NVME_DNR;
     }
 
-    if (nvme_ns_ext(ns)) {
-        len += nvme_m2b(ns, nlb);
+    if (nvme_ns_ext(nvm)) {
+        len += nvme_m2b(nvm, nlb);
     }
 
     status = nvme_check_mdts(n, len);
@@ -2878,7 +2893,7 @@ static uint16_t nvme_compare(NvmeCtrl *n, NvmeRequest *req)
         return status;
     }
 
-    status = nvme_check_bounds(ns, slba, nlb);
+    status = nvme_check_bounds(nvm, slba, nlb);
     if (status) {
         return status;
     }
@@ -3051,22 +3066,23 @@ static uint16_t nvme_read(NvmeCtrl *n, NvmeRequest *req)
 {
     NvmeRwCmd *rw = (NvmeRwCmd *)&req->cmd;
     NvmeNamespace *ns = req->ns;
+    NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns);
     uint64_t slba = le64_to_cpu(rw->slba);
     uint32_t nlb = (uint32_t)le16_to_cpu(rw->nlb) + 1;
     uint8_t prinfo = NVME_RW_PRINFO(le16_to_cpu(rw->control));
-    uint64_t data_size = nvme_l2b(ns, nlb);
+    uint64_t data_size = nvme_l2b(nvm, nlb);
     uint64_t mapped_size = data_size;
     uint64_t data_offset;
     BlockBackend *blk = ns->blkconf.blk;
     uint16_t status;
 
-    if (nvme_ns_ext(ns)) {
-        mapped_size += nvme_m2b(ns, nlb);
+    if (nvme_ns_ext(nvm)) {
+        mapped_size += nvme_m2b(nvm, nlb);
 
-        if (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps)) {
+        if (NVME_ID_NS_DPS_TYPE(nvm->id_ns.dps)) {
             bool pract = prinfo & NVME_PRINFO_PRACT;
 
-            if (pract && ns->lbaf.ms == 8) {
+            if (pract && nvm->lbaf.ms == 8) {
                 mapped_size = data_size;
             }
         }
@@ -3079,7 +3095,7 @@ static uint16_t nvme_read(NvmeCtrl *n, NvmeRequest *req)
         goto invalid;
     }
 
-    status = nvme_check_bounds(ns, slba, nlb);
+    status = nvme_check_bounds(nvm, slba, nlb);
     if (status) {
         goto invalid;
     }
@@ -3099,7 +3115,7 @@ static uint16_t nvme_read(NvmeCtrl *n, NvmeRequest *req)
         }
     }
 
-    if (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps)) {
+    if (NVME_ID_NS_DPS_TYPE(nvm->id_ns.dps)) {
         return nvme_dif_rw(n, req);
     }
 
@@ -3108,7 +3124,7 @@ static uint16_t nvme_read(NvmeCtrl *n, NvmeRequest *req)
         goto invalid;
     }
 
-    data_offset = nvme_l2b(ns, slba);
+    data_offset = nvme_l2b(nvm, slba);
 
     block_acct_start(blk_get_stats(blk), &req->acct, data_size,
                      BLOCK_ACCT_READ);
@@ -3125,11 +3141,12 @@ static uint16_t nvme_do_write(NvmeCtrl *n, NvmeRequest *req, bool append,
 {
     NvmeRwCmd *rw = (NvmeRwCmd *)&req->cmd;
     NvmeNamespace *ns = req->ns;
+    NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns);
     uint64_t slba = le64_to_cpu(rw->slba);
     uint32_t nlb = (uint32_t)le16_to_cpu(rw->nlb) + 1;
     uint16_t ctrl = le16_to_cpu(rw->control);
     uint8_t prinfo = NVME_RW_PRINFO(ctrl);
-    uint64_t data_size = nvme_l2b(ns, nlb);
+    uint64_t data_size = nvme_l2b(nvm, nlb);
     uint64_t mapped_size = data_size;
     uint64_t data_offset;
     NvmeZone *zone;
@@ -3137,14 +3154,14 @@ static uint16_t nvme_do_write(NvmeCtrl *n, NvmeRequest *req, bool append,
     BlockBackend *blk = ns->blkconf.blk;
     uint16_t status;
 
-    if (nvme_ns_ext(ns)) {
-        mapped_size += nvme_m2b(ns, nlb);
+    if (nvme_ns_ext(nvm)) {
+        mapped_size += nvme_m2b(nvm, nlb);
 
-        if (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps)) {
+        if (NVME_ID_NS_DPS_TYPE(nvm->id_ns.dps)) {
             bool pract = prinfo & NVME_PRINFO_PRACT;
 
-            if (pract && ns->lbaf.ms == 8) {
-                mapped_size -= nvme_m2b(ns, nlb);
+            if (pract && nvm->lbaf.ms == 8) {
+                mapped_size -= nvme_m2b(nvm, nlb);
             }
         }
     }
@@ -3159,7 +3176,7 @@ static uint16_t nvme_do_write(NvmeCtrl *n, NvmeRequest *req, bool append,
         }
     }
 
-    status = nvme_check_bounds(ns, slba, nlb);
+    status = nvme_check_bounds(nvm, slba, nlb);
     if (status) {
         goto invalid;
     }
@@ -3189,7 +3206,7 @@ static uint16_t nvme_do_write(NvmeCtrl *n, NvmeRequest *req, bool append,
             rw->slba = cpu_to_le64(slba);
             res->slba = cpu_to_le64(slba);
 
-            switch (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps)) {
+            switch (NVME_ID_NS_DPS_TYPE(nvm->id_ns.dps)) {
             case NVME_ID_NS_DPS_TYPE_1:
                 if (!piremap) {
                     return NVME_INVALID_PROT_INFO | NVME_DNR;
@@ -3227,9 +3244,9 @@ static uint16_t nvme_do_write(NvmeCtrl *n, NvmeRequest *req, bool append,
         zone->w_ptr += nlb;
     }
 
-    data_offset = nvme_l2b(ns, slba);
+    data_offset = nvme_l2b(nvm, slba);
 
-    if (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps)) {
+    if (NVME_ID_NS_DPS_TYPE(nvm->id_ns.dps)) {
         return nvme_dif_rw(n, req);
     }
 
@@ -3273,6 +3290,7 @@ static inline uint16_t nvme_zone_append(NvmeCtrl *n, NvmeRequest *req)
 static uint16_t nvme_get_mgmt_zone_slba_idx(NvmeNamespace *ns, NvmeCmd *c,
                                             uint64_t *slba, uint32_t *zone_idx)
 {
+    NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns);
     NvmeNamespaceZoned *zoned;
     uint32_t dw10 = le32_to_cpu(c->cdw10);
@@ -3286,8 +3304,8 @@ static uint16_t nvme_get_mgmt_zone_slba_idx(NvmeNamespace *ns, NvmeCmd *c,
     zoned = NVME_NAMESPACE_ZONED(ns);
 
     *slba = ((uint64_t)dw11) << 32 | dw10;
-    if (unlikely(*slba >= ns->id_ns.nsze)) {
-        trace_pci_nvme_err_invalid_lba_range(*slba, 0, ns->id_ns.nsze);
+    if (unlikely(*slba >= nvm->id_ns.nsze)) {
+        trace_pci_nvme_err_invalid_lba_range(*slba, 0, nvm->id_ns.nsze);
         *slba = 0;
         return NVME_LBA_RANGE | NVME_DNR;
     }
@@ -3506,6 +3524,8 @@ static void nvme_zone_reset_epilogue_cb(void *opaque, int ret)
     NvmeZoneResetAIOCB *iocb = opaque;
     NvmeRequest *req = iocb->req;
     NvmeNamespace *ns = req->ns;
+    NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns);
+    NvmeNamespaceZoned *zoned = NVME_NAMESPACE_ZONED(ns);
     int64_t moff;
     int count;
 
@@ -3514,13 +3534,13 @@ static void nvme_zone_reset_epilogue_cb(void *opaque, int ret)
         return;
     }
 
-    if (!ns->lbaf.ms) {
+    if (!nvm->lbaf.ms) {
         nvme_zone_reset_cb(iocb, 0);
         return;
     }
 
-    moff = nvme_moff(ns, iocb->zone->d.zslba);
-    count = nvme_m2b(ns, NVME_NAMESPACE_ZONED(ns)->zone_size);
+    moff = nvme_moff(nvm, iocb->zone->d.zslba);
+    count = nvme_m2b(nvm, zoned->zone_size);
 
     iocb->aiocb = blk_aio_pwrite_zeroes(ns->blkconf.blk, moff, count,
                                         BDRV_REQ_MAY_UNMAP,
@@ -3533,6 +3553,7 @@ static void nvme_zone_reset_cb(void *opaque, int ret)
     NvmeZoneResetAIOCB *iocb = opaque;
     NvmeRequest *req = iocb->req;
     NvmeNamespace *ns = req->ns;
+    NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns);
     NvmeNamespaceZoned *zoned = NVME_NAMESPACE_ZONED(ns);
 
     if (ret < 0) {
@@ -3573,8 +3594,8 @@ static void nvme_zone_reset_cb(void *opaque, int ret)
         trace_pci_nvme_zns_zone_reset(zone->d.zslba);
 
         iocb->aiocb = blk_aio_pwrite_zeroes(ns->blkconf.blk,
-                                            nvme_l2b(ns, zone->d.zslba),
-                                            nvme_l2b(ns, zoned->zone_size),
+                                            nvme_l2b(nvm, zone->d.zslba),
+                                            nvme_l2b(nvm, zoned->zone_size),
                                             BDRV_REQ_MAY_UNMAP,
                                             nvme_zone_reset_epilogue_cb,
                                             iocb);
@@ -4471,7 +4492,8 @@ static uint16_t nvme_identify_ns(NvmeCtrl *n, NvmeRequest *req, bool active)
     }
 
     if (active || ns->csi == NVME_CSI_NVM) {
-        return nvme_c2h(n, (uint8_t *)&ns->id_ns, sizeof(NvmeIdNs), req);
+        NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns);
+        return nvme_c2h(n, (uint8_t *)&nvm->id_ns, sizeof(NvmeIdNs), req);
     }
 
     return NVME_INVALID_CMD_SET | NVME_DNR;
@@ -4990,6 +5012,7 @@ static uint16_t nvme_set_feature_timestamp(NvmeCtrl *n, NvmeRequest *req)
 static uint16_t nvme_set_feature(NvmeCtrl *n, NvmeRequest *req)
 {
     NvmeNamespace *ns = NULL;
+    NvmeNamespaceNvm *nvm;
 
     NvmeCmd *cmd = &req->cmd;
     uint32_t dw10 = le32_to_cpu(cmd->cdw10);
@@ -5064,7 +5087,9 @@ static uint16_t nvme_set_feature(NvmeCtrl *n, NvmeRequest *req)
                     continue;
                 }
 
-                if (NVME_ID_NS_NSFEAT_DULBE(ns->id_ns.nsfeat)) {
+                nvm = NVME_NAMESPACE_NVM(ns);
+
+                if (NVME_ID_NS_NSFEAT_DULBE(nvm->id_ns.nsfeat)) {
                     ns->features.err_rec = dw11;
                 }
             }
@@ -5073,7 +5098,8 @@ static uint16_t nvme_set_feature(NvmeCtrl *n, NvmeRequest *req)
         }
 
         assert(ns);
-        if (NVME_ID_NS_NSFEAT_DULBE(ns->id_ns.nsfeat)) {
+        nvm = NVME_NAMESPACE_NVM(ns);
+        if (NVME_ID_NS_NSFEAT_DULBE(nvm->id_ns.nsfeat)) {
             ns->features.err_rec = dw11;
         }
         break;
@@ -5155,12 +5181,15 @@ static void nvme_update_dmrsl(NvmeCtrl *n)
 
     for (nsid = 1; nsid <= NVME_MAX_NAMESPACES; nsid++) {
         NvmeNamespace *ns = nvme_ns(n, nsid);
+        NvmeNamespaceNvm *nvm;
         if (!ns) {
             continue;
         }
 
+        nvm = NVME_NAMESPACE_NVM(ns);
+
         n->dmrsl = MIN_NON_ZERO(n->dmrsl,
-                                BDRV_REQUEST_MAX_BYTES / nvme_l2b(ns, 1));
+                                BDRV_REQUEST_MAX_BYTES / nvme_l2b(nvm, 1));
     }
 }
 
@@ -5302,6 +5331,7 @@ static const AIOCBInfo nvme_format_aiocb_info = {
 
 static void nvme_format_set(NvmeNamespace *ns, NvmeCmd *cmd)
 {
+    NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns);
     uint32_t dw10 = le32_to_cpu(cmd->cdw10);
     uint8_t lbaf = dw10 & 0xf;
     uint8_t pi = (dw10 >> 5) & 0x7;
@@ -5310,8 +5340,8 @@ static void nvme_format_set(NvmeNamespace *ns, NvmeCmd *cmd)
 
     trace_pci_nvme_format_set(ns->params.nsid, lbaf, mset, pi, pil);
 
-    ns->id_ns.dps = (pil << 3) | pi;
-    ns->id_ns.flbas = lbaf | (mset << 4);
+    nvm->id_ns.dps = (pil << 3) | pi;
+    nvm->id_ns.flbas = lbaf | (mset << 4);
 
     nvme_ns_init_format(ns);
 }
@@ -5321,6 +5351,7 @@ static void nvme_format_ns_cb(void *opaque, int ret)
     NvmeFormatAIOCB *iocb = opaque;
     NvmeRequest *req = iocb->req;
     NvmeNamespace *ns = iocb->ns;
+    NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns);
     int bytes;
 
    if (ret < 0) {
@@ -5330,8 +5361,8 @@ static void nvme_format_ns_cb(void *opaque, int ret)
 
     assert(ns);
 
-    if (iocb->offset < ns->size) {
-        bytes = MIN(BDRV_REQUEST_MAX_BYTES, ns->size - iocb->offset);
+    if (iocb->offset < nvm->size) {
+        bytes = MIN(BDRV_REQUEST_MAX_BYTES, nvm->size - iocb->offset);
 
         iocb->aiocb = blk_aio_pwrite_zeroes(ns->blkconf.blk, iocb->offset,
                                             bytes, BDRV_REQ_MAY_UNMAP,
@@ -5353,15 +5384,17 @@ done:
 
 static uint16_t nvme_format_check(NvmeNamespace *ns, uint8_t lbaf, uint8_t pi)
 {
+    NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns);
+
     if (nvme_ns_zoned(ns)) {
         return NVME_INVALID_FORMAT | NVME_DNR;
     }
 
-    if (lbaf > ns->id_ns.nlbaf) {
+    if (lbaf > nvm->id_ns.nlbaf) {
         return NVME_INVALID_FORMAT | NVME_DNR;
     }
 
-    if (pi && (ns->id_ns.lbaf[lbaf].ms < sizeof(NvmeDifTuple))) {
+    if (pi && (nvm->id_ns.lbaf[lbaf].ms < sizeof(NvmeDifTuple))) {
         return NVME_INVALID_FORMAT | NVME_DNR;
     }
 
@@ -6510,6 +6543,7 @@ static int nvme_init_subsys(NvmeCtrl *n, Error **errp)
 
 void nvme_attach_ns(NvmeCtrl *n, NvmeNamespace *ns)
 {
+    NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns);
     uint32_t nsid = ns->params.nsid;
     assert(nsid && nsid <= NVME_MAX_NAMESPACES);
 
@@ -6517,7 +6551,7 @@ void nvme_attach_ns(NvmeCtrl *n, NvmeNamespace *ns)
     ns->attached++;
 
     n->dmrsl = MIN_NON_ZERO(n->dmrsl,
-                            BDRV_REQUEST_MAX_BYTES / nvme_l2b(ns, 1));
+                            BDRV_REQUEST_MAX_BYTES / nvme_l2b(nvm, 1));
 }
 
 static void nvme_realize(PCIDevice *pci_dev, Error **errp)
diff --git a/hw/nvme/dif.c b/hw/nvme/dif.c
index cd0cea2b5ebd..26c7412eb523 100644
--- a/hw/nvme/dif.c
+++ b/hw/nvme/dif.c
@@ -16,10 +16,10 @@
 #include "dif.h"
 #include "trace.h"
 
-uint16_t nvme_check_prinfo(NvmeNamespace *ns, uint8_t prinfo, uint64_t slba,
-                           uint32_t reftag)
+uint16_t nvme_check_prinfo(NvmeNamespaceNvm *nvm, uint8_t prinfo,
+                           uint64_t slba, uint32_t reftag)
 {
-    if ((NVME_ID_NS_DPS_TYPE(ns->id_ns.dps) == NVME_ID_NS_DPS_TYPE_1) &&
+    if ((NVME_ID_NS_DPS_TYPE(nvm->id_ns.dps) == NVME_ID_NS_DPS_TYPE_1) &&
         (prinfo & NVME_PRINFO_PRCHK_REF) && (slba & 0xffffffff) != reftag) {
         return NVME_INVALID_PROT_INFO | NVME_DNR;
     }
@@ -40,23 +40,23 @@ static uint16_t crc_t10dif(uint16_t crc, const unsigned char *buffer,
     return crc;
 }
 
-void nvme_dif_pract_generate_dif(NvmeNamespace *ns, uint8_t *buf, size_t len,
-                                 uint8_t *mbuf, size_t mlen, uint16_t apptag,
-                                 uint32_t *reftag)
+void nvme_dif_pract_generate_dif(NvmeNamespaceNvm *nvm, uint8_t *buf,
+                                 size_t len, uint8_t *mbuf, size_t mlen,
+                                 uint16_t apptag, uint32_t *reftag)
 {
     uint8_t *end = buf + len;
     int16_t pil = 0;
 
-    if (!(ns->id_ns.dps & NVME_ID_NS_DPS_FIRST_EIGHT)) {
-        pil = ns->lbaf.ms - sizeof(NvmeDifTuple);
+    if (!(nvm->id_ns.dps & NVME_ID_NS_DPS_FIRST_EIGHT)) {
+        pil = nvm->lbaf.ms - sizeof(NvmeDifTuple);
     }
 
-    trace_pci_nvme_dif_pract_generate_dif(len, ns->lbasz, ns->lbasz + pil,
+    trace_pci_nvme_dif_pract_generate_dif(len, nvm->lbasz, nvm->lbasz + pil,
                                           apptag, *reftag);
 
-    for (; buf < end; buf += ns->lbasz, mbuf += ns->lbaf.ms) {
+    for (; buf < end; buf += nvm->lbasz, mbuf += nvm->lbaf.ms) {
         NvmeDifTuple *dif = (NvmeDifTuple *)(mbuf + pil);
-        uint16_t crc = crc_t10dif(0x0, buf, ns->lbasz);
+        uint16_t crc = crc_t10dif(0x0, buf, nvm->lbasz);
 
         if (pil) {
             crc = crc_t10dif(crc, mbuf, pil);
@@ -66,18 +66,18 @@ void nvme_dif_pract_generate_dif(NvmeNamespace *ns, uint8_t *buf, size_t len,
         dif->apptag = cpu_to_be16(apptag);
         dif->reftag = cpu_to_be32(*reftag);
 
-        if (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps) != NVME_ID_NS_DPS_TYPE_3) {
+        if (NVME_ID_NS_DPS_TYPE(nvm->id_ns.dps) != NVME_ID_NS_DPS_TYPE_3) {
            (*reftag)++;
         }
     }
 }
 
-static uint16_t nvme_dif_prchk(NvmeNamespace *ns, NvmeDifTuple *dif,
+static uint16_t nvme_dif_prchk(NvmeNamespaceNvm *nvm, NvmeDifTuple *dif,
                                uint8_t *buf, uint8_t *mbuf, size_t pil,
                                uint8_t prinfo, uint16_t apptag,
                                uint16_t appmask, uint32_t reftag)
 {
-    switch (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps)) {
+    switch (NVME_ID_NS_DPS_TYPE(nvm->id_ns.dps)) {
     case NVME_ID_NS_DPS_TYPE_3:
         if (be32_to_cpu(dif->reftag) != 0xffffffff) {
             break;
@@ -97,7 +97,7 @@ static uint16_t nvme_dif_prchk(NvmeNamespace *ns, NvmeDifTuple *dif,
     }
 
     if (prinfo & NVME_PRINFO_PRCHK_GUARD) {
-        uint16_t crc = crc_t10dif(0x0, buf, ns->lbasz);
+        uint16_t crc = crc_t10dif(0x0, buf, nvm->lbasz);
 
         if (pil) {
             crc = crc_t10dif(crc, mbuf, pil);
@@ -130,7 +130,7 @@ static uint16_t nvme_dif_prchk(NvmeNamespace *ns, NvmeDifTuple *dif,
     return NVME_SUCCESS;
 }
 
-uint16_t nvme_dif_check(NvmeNamespace *ns, uint8_t *buf, size_t len,
+uint16_t nvme_dif_check(NvmeNamespaceNvm *nvm, uint8_t *buf, size_t len,
                         uint8_t *mbuf, size_t mlen, uint8_t prinfo,
                         uint64_t slba, uint16_t apptag,
                         uint16_t appmask, uint32_t *reftag)
@@ -139,27 +139,27 @@ uint16_t nvme_dif_check(NvmeNamespace *ns, uint8_t *buf, size_t len,
     int16_t pil = 0;
     uint16_t status;
 
-    status = nvme_check_prinfo(ns, prinfo, slba, *reftag);
+    status = nvme_check_prinfo(nvm, prinfo, slba, *reftag);
     if (status) {
         return status;
     }
 
-    if (!(ns->id_ns.dps & NVME_ID_NS_DPS_FIRST_EIGHT)) {
-        pil = ns->lbaf.ms - sizeof(NvmeDifTuple);
+    if (!(nvm->id_ns.dps & NVME_ID_NS_DPS_FIRST_EIGHT)) {
+        pil = nvm->lbaf.ms - sizeof(NvmeDifTuple);
     }
 
-    trace_pci_nvme_dif_check(prinfo, ns->lbasz + pil);
+    trace_pci_nvme_dif_check(prinfo, nvm->lbasz + pil);
 
-    for (; buf < end; buf += ns->lbasz, mbuf += ns->lbaf.ms) {
+    for (; buf < end; buf += nvm->lbasz, mbuf += nvm->lbaf.ms) {
         NvmeDifTuple *dif = (NvmeDifTuple *)(mbuf + pil);
 
-        status = nvme_dif_prchk(ns, dif, buf, mbuf, pil, prinfo, apptag,
+        status = nvme_dif_prchk(nvm, dif, buf, mbuf, pil, prinfo, apptag,
                                 appmask, *reftag);
         if (status) {
             return status;
        }
 
-        if (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps) != NVME_ID_NS_DPS_TYPE_3) {
+        if (NVME_ID_NS_DPS_TYPE(nvm->id_ns.dps) != NVME_ID_NS_DPS_TYPE_3) {
             (*reftag)++;
         }
     }
@@ -170,21 +170,22 @@ uint16_t nvme_dif_check(NvmeNamespace *ns, uint8_t *buf, size_t len,
 uint16_t nvme_dif_mangle_mdata(NvmeNamespace *ns, uint8_t *mbuf, size_t mlen,
                                uint64_t slba)
 {
+    NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns);
     BlockBackend *blk = ns->blkconf.blk;
     BlockDriverState *bs = blk_bs(blk);
 
-    int64_t moffset = 0, offset = nvme_l2b(ns, slba);
+    int64_t moffset = 0, offset = nvme_l2b(nvm, slba);
     uint8_t *mbufp, *end;
     bool zeroed;
     int16_t pil = 0;
-    int64_t bytes = (mlen / ns->lbaf.ms) << ns->lbaf.ds;
+    int64_t bytes = (mlen / nvm->lbaf.ms) << nvm->lbaf.ds;
     int64_t pnum = 0;
 
     Error *err = NULL;
 
-    if (!(ns->id_ns.dps & NVME_ID_NS_DPS_FIRST_EIGHT)) {
-        pil = ns->lbaf.ms - sizeof(NvmeDifTuple);
+    if (!(nvm->id_ns.dps & NVME_ID_NS_DPS_FIRST_EIGHT)) {
+        pil = nvm->lbaf.ms - sizeof(NvmeDifTuple);
    }
 
     do {
@@ -206,15 +207,15 @@ uint16_t nvme_dif_mangle_mdata(NvmeNamespace *ns, uint8_t *mbuf, size_t mlen,
 
         if (zeroed) {
             mbufp = mbuf + moffset;
-            mlen = (pnum >> ns->lbaf.ds) * ns->lbaf.ms;
+            mlen = (pnum >> nvm->lbaf.ds) * nvm->lbaf.ms;
             end = mbufp + mlen;
 
-            for (; mbufp < end; mbufp += ns->lbaf.ms) {
+            for (; mbufp < end; mbufp += nvm->lbaf.ms) {
                 memset(mbufp + pil, 0xff, sizeof(NvmeDifTuple));
             }
         }
 
-        moffset += (pnum >> ns->lbaf.ds) * ns->lbaf.ms;
+        moffset += (pnum >> nvm->lbaf.ds) * nvm->lbaf.ms;
         offset += pnum;
     } while (pnum != bytes);
 
@@ -246,6 +247,7 @@ static void nvme_dif_rw_check_cb(void *opaque, int ret)
     NvmeBounceContext *ctx = opaque;
     NvmeRequest *req = ctx->req;
     NvmeNamespace *ns = req->ns;
+    NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns);
     NvmeCtrl *n = nvme_ctrl(req);
     NvmeRwCmd *rw = (NvmeRwCmd *)&req->cmd;
     uint64_t slba = le64_to_cpu(rw->slba);
@@ -269,7 +271,7 @@ static void nvme_dif_rw_check_cb(void *opaque, int ret)
         goto out;
     }
 
-    status = nvme_dif_check(ns, ctx->data.bounce, ctx->data.iov.size,
+    status = nvme_dif_check(nvm, ctx->data.bounce, ctx->data.iov.size,
                             ctx->mdata.bounce, ctx->mdata.iov.size, prinfo,
                             slba, apptag, appmask, &reftag);
     if (status) {
@@ -284,7 +286,7 @@ static void nvme_dif_rw_check_cb(void *opaque, int ret)
         goto out;
     }
 
-    if (prinfo & NVME_PRINFO_PRACT && ns->lbaf.ms == 8) {
+    if (prinfo & NVME_PRINFO_PRACT && nvm->lbaf.ms == 8) {
         goto out;
     }
 
@@ -303,11 +305,12 @@ static void nvme_dif_rw_mdata_in_cb(void *opaque, int ret)
     NvmeBounceContext *ctx = opaque;
     NvmeRequest *req = ctx->req;
     NvmeNamespace *ns = req->ns;
+    NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns);
     NvmeRwCmd *rw = (NvmeRwCmd *)&req->cmd;
     uint64_t slba = le64_to_cpu(rw->slba);
     uint32_t nlb = le16_to_cpu(rw->nlb) + 1;
-    size_t mlen = nvme_m2b(ns, nlb);
-    uint64_t offset = nvme_moff(ns, slba);
+    size_t mlen = nvme_m2b(nvm, nlb);
+    uint64_t offset = nvme_moff(nvm, slba);
     BlockBackend *blk = ns->blkconf.blk;
 
     trace_pci_nvme_dif_rw_mdata_in_cb(nvme_cid(req), blk_name(blk));
@@ -334,9 +337,10 @@ static void nvme_dif_rw_mdata_out_cb(void *opaque, int ret)
     NvmeBounceContext *ctx = opaque;
     NvmeRequest *req = ctx->req;
     NvmeNamespace *ns = req->ns;
+    NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns);
     NvmeRwCmd *rw = (NvmeRwCmd *)&req->cmd;
     uint64_t slba = le64_to_cpu(rw->slba);
-    uint64_t offset = nvme_moff(ns, slba);
+    uint64_t offset = nvme_moff(nvm, slba);
     BlockBackend *blk = ns->blkconf.blk;
 
     trace_pci_nvme_dif_rw_mdata_out_cb(nvme_cid(req), blk_name(blk));
@@ -357,14 +361,15 @@ uint16_t nvme_dif_rw(NvmeCtrl *n, NvmeRequest *req)
 {
     NvmeRwCmd *rw = (NvmeRwCmd *)&req->cmd;
     NvmeNamespace *ns = req->ns;
+    NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns);
     BlockBackend *blk = ns->blkconf.blk;
     bool wrz = rw->opcode == NVME_CMD_WRITE_ZEROES;
     uint32_t nlb = le16_to_cpu(rw->nlb) + 1;
     uint64_t slba = le64_to_cpu(rw->slba);
-    size_t len = nvme_l2b(ns, nlb);
-    size_t mlen = nvme_m2b(ns, nlb);
+    size_t len = nvme_l2b(nvm, nlb);
+    size_t mlen = nvme_m2b(nvm, nlb);
     size_t mapped_len = len;
-    int64_t offset = nvme_l2b(ns, slba);
+    int64_t offset = nvme_l2b(nvm, slba);
     uint8_t prinfo = NVME_RW_PRINFO(le16_to_cpu(rw->control));
     uint16_t apptag = le16_to_cpu(rw->apptag);
     uint16_t appmask = le16_to_cpu(rw->appmask);
@@ -388,9 +393,9 @@ uint16_t nvme_dif_rw(NvmeCtrl *n, NvmeRequest *req)
 
         if (pract) {
             uint8_t *mbuf, *end;
-            int16_t pil = ns->lbaf.ms - sizeof(NvmeDifTuple);
+            int16_t pil = nvm->lbaf.ms - sizeof(NvmeDifTuple);
 
-            status = nvme_check_prinfo(ns, prinfo, slba, reftag);
+            status = nvme_check_prinfo(nvm, prinfo, slba, reftag);
             if (status) {
                 goto err;
             }
@@ -405,17 +410,17 @@ uint16_t nvme_dif_rw(NvmeCtrl *n, NvmeRequest *req)
             mbuf = ctx->mdata.bounce;
             end = mbuf + mlen;
 
-            if (ns->id_ns.dps & NVME_ID_NS_DPS_FIRST_EIGHT) {
+            if (nvm->id_ns.dps & NVME_ID_NS_DPS_FIRST_EIGHT) {
                 pil = 0;
             }
 
-            for (; mbuf < end; mbuf += ns->lbaf.ms) {
+            for (; mbuf < end; mbuf += nvm->lbaf.ms) {
                 NvmeDifTuple *dif = (NvmeDifTuple *)(mbuf + pil);
 
                 dif->apptag = cpu_to_be16(apptag);
                 dif->reftag = cpu_to_be32(reftag);
 
-                switch (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps)) {
+                switch (NVME_ID_NS_DPS_TYPE(nvm->id_ns.dps)) {
                 case NVME_ID_NS_DPS_TYPE_1:
                 case NVME_ID_NS_DPS_TYPE_2:
                     reftag++;
@@ -428,7 +433,7 @@ uint16_t nvme_dif_rw(NvmeCtrl *n, NvmeRequest *req)
         return NVME_NO_COMPLETE;
     }
 
-    if (nvme_ns_ext(ns) && !(pract && ns->lbaf.ms == 8)) {
+    if (nvme_ns_ext(nvm) && !(pract && nvm->lbaf.ms == 8)) {
         mapped_len += mlen;
     }
 
@@ -462,7 +467,7 @@ uint16_t nvme_dif_rw(NvmeCtrl *n, NvmeRequest *req)
     qemu_iovec_init(&ctx->mdata.iov, 1);
     qemu_iovec_add(&ctx->mdata.iov, ctx->mdata.bounce, mlen);
 
-    if (!(pract && ns->lbaf.ms == 8)) {
+    if (!(pract && nvm->lbaf.ms == 8)) {
         status = nvme_bounce_mdata(n, ctx->mdata.bounce, ctx->mdata.iov.size,
                                    NVME_TX_DIRECTION_TO_DEVICE, req);
         if (status) {
@@ -470,18 +475,18 @@ uint16_t nvme_dif_rw(NvmeCtrl *n, NvmeRequest *req)
         }
     }
 
-    status = nvme_check_prinfo(ns, prinfo, slba, reftag);
+    status = nvme_check_prinfo(nvm, prinfo, slba, reftag);
     if (status) {
         goto err;
     }
 
     if (pract) {
         /* splice generated protection information into the buffer */
-        nvme_dif_pract_generate_dif(ns, ctx->data.bounce, ctx->data.iov.size,
+        nvme_dif_pract_generate_dif(nvm, ctx->data.bounce, ctx->data.iov.size,
                                     ctx->mdata.bounce, ctx->mdata.iov.size,
                                     apptag, &reftag);
     } else {
-        status = nvme_dif_check(ns, ctx->data.bounce, ctx->data.iov.size,
+        status = nvme_dif_check(nvm, ctx->data.bounce, ctx->data.iov.size,
                                 ctx->mdata.bounce, ctx->mdata.iov.size, prinfo,
                                 slba, apptag, appmask, &reftag);
         if (status) {
diff --git a/hw/nvme/dif.h b/hw/nvme/dif.h
index e36fea30e71e..7d47299252ae 100644
--- a/hw/nvme/dif.h
+++ b/hw/nvme/dif.h
@@ -37,14 +37,14 @@ static const uint16_t t10_dif_crc_table[256] = {
     0xF0D8, 0x7B6F, 0x6C01, 0xE7B6, 0x42DD, 0xC96A, 0xDE04, 0x55B3
 };
 
-uint16_t nvme_check_prinfo(NvmeNamespace *ns, uint8_t prinfo, uint64_t slba,
-                           uint32_t reftag);
+uint16_t nvme_check_prinfo(NvmeNamespaceNvm *nvm, uint8_t prinfo,
+                           uint64_t slba, uint32_t reftag);
 uint16_t nvme_dif_mangle_mdata(NvmeNamespace *ns, uint8_t *mbuf, size_t mlen,
                                uint64_t slba);
-void nvme_dif_pract_generate_dif(NvmeNamespace *ns, uint8_t *buf, size_t len,
-                                 uint8_t *mbuf, size_t mlen, uint16_t apptag,
-                                 uint32_t *reftag);
-uint16_t nvme_dif_check(NvmeNamespace *ns, uint8_t *buf, size_t len,
+void nvme_dif_pract_generate_dif(NvmeNamespaceNvm *nvm, uint8_t *buf,
+                                 size_t len, uint8_t *mbuf, size_t mlen,
+                                 uint16_t apptag, uint32_t *reftag);
+uint16_t nvme_dif_check(NvmeNamespaceNvm *nvm, uint8_t *buf, size_t len,
                         uint8_t *mbuf, size_t mlen, uint8_t prinfo,
                         uint64_t slba, uint16_t apptag,
                         uint16_t appmask, uint32_t *reftag);
diff --git a/hw/nvme/ns.c b/hw/nvme/ns.c
index 419e501239da..0e231102c475 100644
--- a/hw/nvme/ns.c
+++ b/hw/nvme/ns.c
@@ -28,14 +28,15 @@ void nvme_ns_init_format(NvmeNamespace *ns)
 {
-    NvmeIdNs *id_ns = &ns->id_ns;
+    NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns);
+    NvmeIdNs *id_ns = &nvm->id_ns;
     BlockDriverInfo bdi;
     int npdg, nlbas, ret;
 
-    ns->lbaf = id_ns->lbaf[NVME_ID_NS_FLBAS_INDEX(id_ns->flbas)];
-    ns->lbasz = 1 << ns->lbaf.ds;
+    nvm->lbaf = id_ns->lbaf[NVME_ID_NS_FLBAS_INDEX(id_ns->flbas)];
+    nvm->lbasz = 1 << nvm->lbaf.ds;
 
-    nlbas = ns->size / (ns->lbasz + ns->lbaf.ms);
+    nlbas = nvm->size / (nvm->lbasz + nvm->lbaf.ms);
 
     id_ns->nsze = cpu_to_le64(nlbas);
@@ -43,13 +44,13 @@ void nvme_ns_init_format(NvmeNamespace *ns)
     id_ns->ncap = id_ns->nsze;
     id_ns->nuse = id_ns->ncap;
 
-    ns->moff = (int64_t)nlbas << ns->lbaf.ds;
+    nvm->moff = (int64_t)nlbas << nvm->lbaf.ds;
 
-    npdg = ns->blkconf.discard_granularity / ns->lbasz;
+    npdg = nvm->discard_granularity / nvm->lbasz;
 
     ret = bdrv_get_info(blk_bs(ns->blkconf.blk), &bdi);
-    if (ret >= 0 && bdi.cluster_size > ns->blkconf.discard_granularity) {
-        npdg = bdi.cluster_size / ns->lbasz;
+    if (ret >= 0 && bdi.cluster_size > nvm->discard_granularity) {
+        npdg = bdi.cluster_size / nvm->lbasz;
     }
 
     id_ns->npda = id_ns->npdg = npdg - 1;
@@ -57,8 +58,9 @@
 static int nvme_ns_init(NvmeNamespace *ns, Error **errp)
 {
+    NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns);
     static uint64_t ns_count;
-    NvmeIdNs *id_ns = &ns->id_ns;
+    NvmeIdNs *id_ns = &nvm->id_ns;
     uint8_t ds;
     uint16_t ms;
     int i;
@@ -66,7 +68,7 @@ static int nvme_ns_init(NvmeNamespace *ns, Error **errp)
     ns->csi = NVME_CSI_NVM;
     ns->status = 0x0;
 
-    ns->id_ns.dlfeat = 0x1;
+    nvm->id_ns.dlfeat = 0x1;
 
     /* support DULBE and I/O optimization fields */
     id_ns->nsfeat |= (0x4 | 0x10);
@@ -82,12 +84,12 @@ static int nvme_ns_init(NvmeNamespace *ns, Error **errp)
     }
 
     /* simple copy */
-    id_ns->mssrl = cpu_to_le16(ns->params.mssrl);
-    id_ns->mcl = cpu_to_le32(ns->params.mcl);
-    id_ns->msrc = ns->params.msrc;
+    id_ns->mssrl = cpu_to_le16(nvm->mssrl);
+    id_ns->mcl = cpu_to_le32(nvm->mcl);
+    id_ns->msrc = nvm->msrc;
     id_ns->eui64 = cpu_to_be64(ns->params.eui64);
 
-    ds = 31 - clz32(ns->blkconf.logical_block_size);
+    ds = 31 - clz32(nvm->lbasz);
     ms = ns->params.ms;
 
     id_ns->mc = NVME_ID_NS_MC_EXTENDED | NVME_ID_NS_MC_SEPARATE;
@@ -140,6 +142,7 @@ lbaf_found:
 static int nvme_ns_init_blk(NvmeNamespace *ns, Error **errp)
 {
+    NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns);
     bool read_only;
 
     if (!blkconf_blocksizes(&ns->blkconf, errp)) {
@@ -156,9 +159,14 @@ static int nvme_ns_init_blk(NvmeNamespace *ns, Error **errp)
             MAX(ns->blkconf.logical_block_size, MIN_DISCARD_GRANULARITY);
     }
 
-    ns->size = blk_getlength(ns->blkconf.blk);
-    if (ns->size < 0) {
-        error_setg_errno(errp, -ns->size, "could not get blockdev size");
+    nvm->lbasz = ns->blkconf.logical_block_size;
+    nvm->discard_granularity = ns->blkconf.discard_granularity;
+    nvm->lbaf.ds = 31 - clz32(nvm->lbasz);
+    nvm->lbaf.ms = ns->params.ms;
+
+    nvm->size = blk_getlength(ns->blkconf.blk);
+    if (nvm->size < 0) {
+        error_setg_errno(errp, -nvm->size, "could not get blockdev size");
         return -1;
     }
 
@@ -167,6 +175,7 @@
 static int nvme_ns_zoned_check_calc_geometry(NvmeNamespace *ns, Error **errp)
 {
+    NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns);
     NvmeNamespaceZoned *zoned = NVME_NAMESPACE_ZONED(ns);
     uint64_t zone_size, zone_cap;
 
@@ -187,14 +196,14 @@ static int nvme_ns_zoned_check_calc_geometry(NvmeNamespace *ns, Error **errp)
                    "zone size %"PRIu64"B", zone_cap, zone_size);
         return -1;
     }
-    if (zone_size < ns->lbasz) {
+    if (zone_size < nvm->lbasz) {
         error_setg(errp, "zone size %"PRIu64"B too small, "
-                   "must be at least %zuB", zone_size, ns->lbasz);
+                   "must be at least %zuB", zone_size, nvm->lbasz);
         return -1;
     }
-    if (zone_cap < ns->lbasz) {
+    if (zone_cap < nvm->lbasz) {
         error_setg(errp, "zone capacity %"PRIu64"B too small, "
-                   "must be at least %zuB", zone_cap, ns->lbasz);
+                   "must be at least %zuB", zone_cap, nvm->lbasz);
         return -1;
     }
 
@@ -202,9 +211,9 @@ static int nvme_ns_zoned_check_calc_geometry(NvmeNamespace *ns, Error **errp)
      * Save the main zone geometry values to avoid
      * calculating them later again.
      */
-    zoned->zone_size = zone_size / ns->lbasz;
-    zoned->zone_capacity = zone_cap / ns->lbasz;
-    zoned->num_zones = le64_to_cpu(ns->id_ns.nsze) / zoned->zone_size;
+    zoned->zone_size = zone_size / nvm->lbasz;
+    zoned->zone_capacity = zone_cap / nvm->lbasz;
+    zoned->num_zones = le64_to_cpu(nvm->id_ns.nsze) / zoned->zone_size;
 
     /* Do a few more sanity checks of ZNS properties */
     if (!zoned->num_zones) {
@@ -258,6 +267,7 @@ static void nvme_ns_zoned_init_state(NvmeNamespaceZoned *zoned)
 
 static void nvme_ns_zoned_init(NvmeNamespace *ns)
 {
+    NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns);
     NvmeNamespaceZoned *zoned = NVME_NAMESPACE_ZONED(ns);
     NvmeIdNsZoned *id_ns_z = &zoned->id_ns;
     int i;
@@ -273,16 +283,16 @@ static void nvme_ns_zoned_init(NvmeNamespace *ns)
         id_ns_z->ozcs |= NVME_ID_NS_ZONED_OZCS_CROSS_READ;
     }
 
-    for (i = 0; i <= ns->id_ns.nlbaf; i++) {
+    for (i = 0; i <= nvm->id_ns.nlbaf; i++) {
         id_ns_z->lbafe[i].zsze = cpu_to_le64(zoned->zone_size);
         id_ns_z->lbafe[i].zdes =
             zoned->zd_extension_size >> 6; /* Units of 64B */
     }
 
     ns->csi = NVME_CSI_ZONED;
-    ns->id_ns.nsze = cpu_to_le64(zoned->num_zones * zoned->zone_size);
-    ns->id_ns.ncap = ns->id_ns.nsze;
-    ns->id_ns.nuse = ns->id_ns.ncap;
+    nvm->id_ns.nsze = cpu_to_le64(zoned->num_zones * zoned->zone_size);
+    nvm->id_ns.ncap = nvm->id_ns.nsze;
+    nvm->id_ns.nuse = nvm->id_ns.ncap;
 
     /*
     * The device uses the BDRV_BLOCK_ZERO flag to determine the "deallocated"
@@ -291,13 +301,13 @@ static void nvme_ns_zoned_init(NvmeNamespace *ns)
     * we can only support DULBE if the zone size is a multiple of the
     * calculated NPDG.
*/ - if (zoned->zone_size % (ns->id_ns.npdg + 1)) { + if (zoned->zone_size % (nvm->id_ns.npdg + 1)) { warn_report("the zone size (%"PRIu64" blocks) is not a multiple of " "the calculated deallocation granularity (%d blocks); " "DULBE support disabled", - zoned->zone_size, ns->id_ns.npdg + 1); + zoned->zone_size, nvm->id_ns.npdg + 1); - ns->id_ns.nsfeat &= ~0x4; + nvm->id_ns.nsfeat &= ~0x4; } } diff --git a/hw/nvme/nvme.h b/hw/nvme/nvme.h index 9cfb172101a9..c5e08cf9e1c1 100644 --- a/hw/nvme/nvme.h +++ b/hw/nvme/nvme.h @@ -147,15 +147,32 @@ typedef struct NvmeNamespaceZoned { QTAILQ_HEAD(, NvmeZone) full_zones; } NvmeNamespaceZoned; +enum { + NVME_NS_NVM_EXTENDED_LBA = 1 << 0, + NVME_NS_NVM_PI_FIRST = 1 << 1, +}; + +typedef struct NvmeNamespaceNvm { + NvmeIdNs id_ns; + + int64_t size; + int64_t moff; + + NvmeLBAF lbaf; + size_t lbasz; + uint32_t discard_granularity; + + uint16_t mssrl; + uint32_t mcl; + uint8_t msrc; + + unsigned long flags; +} NvmeNamespaceNvm; + typedef struct NvmeNamespace { DeviceState parent_obj; BlockConf blkconf; int32_t bootindex; - int64_t size; - int64_t moff; - NvmeIdNs id_ns; - NvmeLBAF lbaf; - size_t lbasz; const uint32_t *iocs; uint8_t csi; uint16_t status; @@ -169,9 +186,11 @@ typedef struct NvmeNamespace { uint32_t err_rec; } features; + NvmeNamespaceNvm nvm; NvmeNamespaceZoned zoned; } NvmeNamespace; +#define NVME_NAMESPACE_NVM(ns) (&(ns)->nvm) #define NVME_NAMESPACE_ZONED(ns) (&(ns)->zoned) static inline uint32_t nvme_nsid(NvmeNamespace *ns) @@ -183,24 +202,24 @@ static inline uint32_t nvme_nsid(NvmeNamespace *ns) return 0; } -static inline size_t nvme_l2b(NvmeNamespace *ns, uint64_t lba) +static inline size_t nvme_l2b(NvmeNamespaceNvm *nvm, uint64_t lba) { - return lba << ns->lbaf.ds; + return lba << nvm->lbaf.ds; } -static inline size_t nvme_m2b(NvmeNamespace *ns, uint64_t lba) +static inline size_t nvme_m2b(NvmeNamespaceNvm *nvm, uint64_t lba) { - return ns->lbaf.ms * lba; + return nvm->lbaf.ms * lba; } -static inline int64_t 
nvme_moff(NvmeNamespace *ns, uint64_t lba) +static inline int64_t nvme_moff(NvmeNamespaceNvm *nvm, uint64_t lba) { - return ns->moff + nvme_m2b(ns, lba); + return nvm->moff + nvme_m2b(nvm, lba); } -static inline bool nvme_ns_ext(NvmeNamespace *ns) +static inline bool nvme_ns_ext(NvmeNamespaceNvm *nvm) { - return !!NVME_ID_NS_FLBAS_EXTENDED(ns->id_ns.flbas); + return !!NVME_ID_NS_FLBAS_EXTENDED(nvm->id_ns.flbas); } void nvme_ns_init_format(NvmeNamespace *ns);

From patchwork Tue Sep 14 20:37:29 2021
X-Patchwork-Submitter: Klaus Jensen
X-Patchwork-Id: 1528137
From: Klaus Jensen
To: qemu-devel@nongnu.org
Cc: Kevin Wolf, qemu-block@nongnu.org, Klaus Jensen, Markus Armbruster, Hanna Reitz, Stefan Hajnoczi, Keith Busch, Philippe Mathieu-Daudé
Subject: [PATCH RFC 05/13] hw/nvme: move BlockBackend to NvmeNamespaceNvm
Date: Tue, 14 Sep 2021 22:37:29 +0200
Message-Id: <20210914203737.182571-6-its@irrelevant.dk>
In-Reply-To: <20210914203737.182571-1-its@irrelevant.dk>
References: <20210914203737.182571-1-its@irrelevant.dk>

From: Klaus Jensen

Signed-off-by: Klaus Jensen
---
hw/nvme/ctrl.c | 66 +++++++++++++++++++++++++------------------------- hw/nvme/dif.c | 14 +++++------ hw/nvme/nvme.h | 6 +++++ 3
files changed, 46 insertions(+), 40 deletions(-) diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c index 7f41181aafa1..f05d85075f08 100644 --- a/hw/nvme/ctrl.c +++ b/hw/nvme/ctrl.c @@ -1465,7 +1465,7 @@ static int nvme_block_status_all(NvmeNamespace *ns, uint64_t slba, uint32_t nlb, int flags) { NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns); - BlockDriverState *bs = blk_bs(ns->blkconf.blk); + BlockDriverState *bs = blk_bs(nvme_blk(ns)); int64_t pnum = 0, bytes = nvme_l2b(nvm, nlb); int64_t offset = nvme_l2b(nvm, slba); @@ -1865,7 +1865,7 @@ void nvme_rw_complete_cb(void *opaque, int ret) { NvmeRequest *req = opaque; NvmeNamespace *ns = req->ns; - BlockBackend *blk = ns->blkconf.blk; + BlockBackend *blk = nvme_blk(ns); BlockAcctCookie *acct = &req->acct; BlockAcctStats *stats = blk_get_stats(blk); @@ -1891,7 +1891,7 @@ static void nvme_rw_cb(void *opaque, int ret) NvmeNamespace *ns = req->ns; NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns); - BlockBackend *blk = ns->blkconf.blk; + BlockBackend *blk = nvme_blk(ns); trace_pci_nvme_rw_cb(nvme_cid(req), blk_name(blk)); @@ -1942,7 +1942,7 @@ static void nvme_verify_cb(void *opaque, int ret) NvmeRequest *req = ctx->req; NvmeNamespace *ns = req->ns; NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns); - BlockBackend *blk = ns->blkconf.blk; + BlockBackend *blk = nvme_blk(ns); BlockAcctCookie *acct = &req->acct; BlockAcctStats *stats = blk_get_stats(blk); NvmeRwCmd *rw = (NvmeRwCmd *)&req->cmd; @@ -2000,7 +2000,7 @@ static void nvme_verify_mdata_in_cb(void *opaque, int ret) uint32_t nlb = le16_to_cpu(rw->nlb) + 1; size_t mlen = nvme_m2b(nvm, nlb); uint64_t offset = nvme_moff(nvm, slba); - BlockBackend *blk = ns->blkconf.blk; + BlockBackend *blk = nvme_blk(ns); trace_pci_nvme_verify_mdata_in_cb(nvme_cid(req), blk_name(blk)); @@ -2046,7 +2046,7 @@ static void nvme_compare_mdata_cb(void *opaque, int ret) uint32_t reftag = le32_to_cpu(rw->reftag); struct nvme_compare_ctx *ctx = req->opaque; g_autofree uint8_t *buf = NULL; - BlockBackend 
*blk = ns->blkconf.blk; + BlockBackend *blk = nvme_blk(ns); BlockAcctCookie *acct = &req->acct; BlockAcctStats *stats = blk_get_stats(blk); uint16_t status = NVME_SUCCESS; @@ -2126,7 +2126,7 @@ static void nvme_compare_data_cb(void *opaque, int ret) NvmeCtrl *n = nvme_ctrl(req); NvmeNamespace *ns = req->ns; NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns); - BlockBackend *blk = ns->blkconf.blk; + BlockBackend *blk = nvme_blk(ns); BlockAcctCookie *acct = &req->acct; BlockAcctStats *stats = blk_get_stats(blk); @@ -2272,7 +2272,7 @@ static void nvme_dsm_md_cb(void *opaque, int ret) nvme_dsm_cb(iocb, 0); } - iocb->aiocb = blk_aio_pwrite_zeroes(ns->blkconf.blk, nvme_moff(nvm, slba), + iocb->aiocb = blk_aio_pwrite_zeroes(nvme_blk(ns), nvme_moff(nvm, slba), nvme_m2b(nvm, nlb), BDRV_REQ_MAY_UNMAP, nvme_dsm_cb, iocb); return; @@ -2320,7 +2320,7 @@ next: goto next; } - iocb->aiocb = blk_aio_pdiscard(ns->blkconf.blk, nvme_l2b(nvm, slba), + iocb->aiocb = blk_aio_pdiscard(nvme_blk(ns), nvme_l2b(nvm, slba), nvme_l2b(nvm, nlb), nvme_dsm_md_cb, iocb); return; @@ -2341,7 +2341,7 @@ static uint16_t nvme_dsm(NvmeCtrl *n, NvmeRequest *req) trace_pci_nvme_dsm(nr, attr); if (attr & NVME_DSMGMT_AD) { - NvmeDSMAIOCB *iocb = blk_aio_get(&nvme_dsm_aiocb_info, ns->blkconf.blk, + NvmeDSMAIOCB *iocb = blk_aio_get(&nvme_dsm_aiocb_info, nvme_blk(ns), nvme_misc_cb, req); iocb->req = req; @@ -2371,7 +2371,7 @@ static uint16_t nvme_verify(NvmeCtrl *n, NvmeRequest *req) NvmeRwCmd *rw = (NvmeRwCmd *)&req->cmd; NvmeNamespace *ns = req->ns; NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns); - BlockBackend *blk = ns->blkconf.blk; + BlockBackend *blk = nvme_blk(ns); uint64_t slba = le64_to_cpu(rw->slba); uint32_t nlb = le16_to_cpu(rw->nlb) + 1; size_t len = nvme_l2b(nvm, nlb); @@ -2421,7 +2421,7 @@ static uint16_t nvme_verify(NvmeCtrl *n, NvmeRequest *req) block_acct_start(blk_get_stats(blk), &req->acct, ctx->data.iov.size, BLOCK_ACCT_READ); - req->aiocb = blk_aio_preadv(ns->blkconf.blk, offset, 
&ctx->data.iov, 0, + req->aiocb = blk_aio_preadv(nvme_blk(ns), offset, &ctx->data.iov, 0, nvme_verify_mdata_in_cb, ctx); return NVME_NO_COMPLETE; } @@ -2472,7 +2472,7 @@ static void nvme_copy_bh(void *opaque) NvmeCopyAIOCB *iocb = opaque; NvmeRequest *req = iocb->req; NvmeNamespace *ns = req->ns; - BlockAcctStats *stats = blk_get_stats(ns->blkconf.blk); + BlockAcctStats *stats = blk_get_stats(nvme_blk(ns)); if (iocb->idx != iocb->nr) { req->cqe.result = cpu_to_le32(iocb->idx); @@ -2555,7 +2555,7 @@ static void nvme_copy_out_cb(void *opaque, int ret) qemu_iovec_reset(&iocb->iov); qemu_iovec_add(&iocb->iov, mbounce, mlen); - iocb->aiocb = blk_aio_pwritev(ns->blkconf.blk, nvme_moff(nvm, iocb->slba), + iocb->aiocb = blk_aio_pwritev(nvme_blk(ns), nvme_moff(nvm, iocb->slba), &iocb->iov, 0, nvme_copy_out_completed_cb, iocb); @@ -2647,7 +2647,7 @@ static void nvme_copy_in_completed_cb(void *opaque, int ret) qemu_iovec_reset(&iocb->iov); qemu_iovec_add(&iocb->iov, iocb->bounce, len); - iocb->aiocb = blk_aio_pwritev(ns->blkconf.blk, nvme_l2b(nvm, iocb->slba), + iocb->aiocb = blk_aio_pwritev(nvme_blk(ns), nvme_l2b(nvm, iocb->slba), &iocb->iov, 0, nvme_copy_out_cb, iocb); return; @@ -2695,7 +2695,7 @@ static void nvme_copy_in_cb(void *opaque, int ret) qemu_iovec_add(&iocb->iov, iocb->bounce + nvme_l2b(nvm, nlb), nvme_m2b(nvm, nlb)); - iocb->aiocb = blk_aio_preadv(ns->blkconf.blk, nvme_moff(nvm, slba), + iocb->aiocb = blk_aio_preadv(nvme_blk(ns), nvme_moff(nvm, slba), &iocb->iov, 0, nvme_copy_in_completed_cb, iocb); return; @@ -2761,7 +2761,7 @@ static void nvme_copy_cb(void *opaque, int ret) qemu_iovec_reset(&iocb->iov); qemu_iovec_add(&iocb->iov, iocb->bounce, len); - iocb->aiocb = blk_aio_preadv(ns->blkconf.blk, nvme_l2b(nvm, slba), + iocb->aiocb = blk_aio_preadv(nvme_blk(ns), nvme_l2b(nvm, slba), &iocb->iov, 0, nvme_copy_in_cb, iocb); return; @@ -2780,7 +2780,7 @@ static uint16_t nvme_copy(NvmeCtrl *n, NvmeRequest *req) NvmeNamespace *ns = req->ns; NvmeNamespaceNvm *nvm = 
NVME_NAMESPACE_NVM(ns); NvmeCopyCmd *copy = (NvmeCopyCmd *)&req->cmd; - NvmeCopyAIOCB *iocb = blk_aio_get(&nvme_copy_aiocb_info, ns->blkconf.blk, + NvmeCopyAIOCB *iocb = blk_aio_get(&nvme_copy_aiocb_info, nvme_blk(ns), nvme_misc_cb, req); uint16_t nr = copy->nr + 1; uint8_t format = copy->control[0] & 0xf; @@ -2847,9 +2847,9 @@ static uint16_t nvme_copy(NvmeCtrl *n, NvmeRequest *req) qemu_iovec_init(&iocb->iov, 1); - block_acct_start(blk_get_stats(ns->blkconf.blk), &iocb->acct.read, 0, + block_acct_start(blk_get_stats(nvme_blk(ns)), &iocb->acct.read, 0, BLOCK_ACCT_READ); - block_acct_start(blk_get_stats(ns->blkconf.blk), &iocb->acct.write, 0, + block_acct_start(blk_get_stats(nvme_blk(ns)), &iocb->acct.write, 0, BLOCK_ACCT_WRITE); req->aiocb = &iocb->common; @@ -2868,7 +2868,7 @@ static uint16_t nvme_compare(NvmeCtrl *n, NvmeRequest *req) NvmeRwCmd *rw = (NvmeRwCmd *)&req->cmd; NvmeNamespace *ns = req->ns; NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns); - BlockBackend *blk = ns->blkconf.blk; + BlockBackend *blk = nvme_blk(ns); uint64_t slba = le64_to_cpu(rw->slba); uint32_t nlb = le16_to_cpu(rw->nlb) + 1; uint8_t prinfo = NVME_RW_PRINFO(le16_to_cpu(rw->control)); @@ -2971,7 +2971,7 @@ static void nvme_flush_ns_cb(void *opaque, int ret) trace_pci_nvme_flush_ns(iocb->nsid); iocb->ns = NULL; - iocb->aiocb = blk_aio_flush(ns->blkconf.blk, nvme_flush_ns_cb, iocb); + iocb->aiocb = blk_aio_flush(nvme_blk(ns), nvme_flush_ns_cb, iocb); return; } @@ -3073,7 +3073,7 @@ static uint16_t nvme_read(NvmeCtrl *n, NvmeRequest *req) uint64_t data_size = nvme_l2b(nvm, nlb); uint64_t mapped_size = data_size; uint64_t data_offset; - BlockBackend *blk = ns->blkconf.blk; + BlockBackend *blk = nvme_blk(ns); uint16_t status; if (nvme_ns_ext(nvm)) { @@ -3151,7 +3151,7 @@ static uint16_t nvme_do_write(NvmeCtrl *n, NvmeRequest *req, bool append, uint64_t data_offset; NvmeZone *zone; NvmeZonedResult *res = (NvmeZonedResult *)&req->cqe; - BlockBackend *blk = ns->blkconf.blk; + BlockBackend *blk = 
nvme_blk(ns); uint16_t status; if (nvme_ns_ext(nvm)) { @@ -3542,7 +3542,7 @@ static void nvme_zone_reset_epilogue_cb(void *opaque, int ret) moff = nvme_moff(nvm, iocb->zone->d.zslba); count = nvme_m2b(nvm, zoned->zone_size); - iocb->aiocb = blk_aio_pwrite_zeroes(ns->blkconf.blk, moff, count, + iocb->aiocb = blk_aio_pwrite_zeroes(nvme_blk(ns), moff, count, BDRV_REQ_MAY_UNMAP, nvme_zone_reset_cb, iocb); return; @@ -3593,7 +3593,7 @@ static void nvme_zone_reset_cb(void *opaque, int ret) trace_pci_nvme_zns_zone_reset(zone->d.zslba); - iocb->aiocb = blk_aio_pwrite_zeroes(ns->blkconf.blk, + iocb->aiocb = blk_aio_pwrite_zeroes(nvme_blk(ns), nvme_l2b(nvm, zone->d.zslba), nvme_l2b(nvm, zoned->zone_size), BDRV_REQ_MAY_UNMAP, @@ -3673,7 +3673,7 @@ static uint16_t nvme_zone_mgmt_send(NvmeCtrl *n, NvmeRequest *req) case NVME_ZONE_ACTION_RESET: trace_pci_nvme_reset_zone(slba, zone_idx, all); - iocb = blk_aio_get(&nvme_zone_reset_aiocb_info, ns->blkconf.blk, + iocb = blk_aio_get(&nvme_zone_reset_aiocb_info, nvme_blk(ns), nvme_misc_cb, req); iocb->req = req; @@ -4072,7 +4072,7 @@ struct nvme_stats { static void nvme_set_blk_stats(NvmeNamespace *ns, struct nvme_stats *stats) { - BlockAcctStats *s = blk_get_stats(ns->blkconf.blk); + BlockAcctStats *s = blk_get_stats(nvme_blk(ns)); stats->units_read += s->nr_bytes[BLOCK_ACCT_READ] >> BDRV_SECTOR_BITS; stats->units_written += s->nr_bytes[BLOCK_ACCT_WRITE] >> BDRV_SECTOR_BITS; @@ -4938,7 +4938,7 @@ static uint16_t nvme_get_feature(NvmeCtrl *n, NvmeRequest *req) continue; } - result = blk_enable_write_cache(ns->blkconf.blk); + result = blk_enable_write_cache(nvme_blk(ns)); if (result) { break; } @@ -5110,11 +5110,11 @@ static uint16_t nvme_set_feature(NvmeCtrl *n, NvmeRequest *req) continue; } - if (!(dw11 & 0x1) && blk_enable_write_cache(ns->blkconf.blk)) { - blk_flush(ns->blkconf.blk); + if (!(dw11 & 0x1) && blk_enable_write_cache(nvme_blk(ns))) { + blk_flush(nvme_blk(ns)); } - blk_set_enable_write_cache(ns->blkconf.blk, dw11 & 1); + 
blk_set_enable_write_cache(nvme_blk(ns), dw11 & 1); } break; @@ -5364,7 +5364,7 @@ static void nvme_format_ns_cb(void *opaque, int ret) if (iocb->offset < nvm->size) { bytes = MIN(BDRV_REQUEST_MAX_BYTES, nvm->size - iocb->offset); - iocb->aiocb = blk_aio_pwrite_zeroes(ns->blkconf.blk, iocb->offset, + iocb->aiocb = blk_aio_pwrite_zeroes(nvme_blk(ns), iocb->offset, bytes, BDRV_REQ_MAY_UNMAP, nvme_format_ns_cb, iocb); diff --git a/hw/nvme/dif.c b/hw/nvme/dif.c index 26c7412eb523..1b8f9ba2fb44 100644 --- a/hw/nvme/dif.c +++ b/hw/nvme/dif.c @@ -171,7 +171,7 @@ uint16_t nvme_dif_mangle_mdata(NvmeNamespace *ns, uint8_t *mbuf, size_t mlen, uint64_t slba) { NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns); - BlockBackend *blk = ns->blkconf.blk; + BlockBackend *blk = nvme_blk(ns); BlockDriverState *bs = blk_bs(blk); int64_t moffset = 0, offset = nvme_l2b(nvm, slba); @@ -227,7 +227,7 @@ static void nvme_dif_rw_cb(void *opaque, int ret) NvmeBounceContext *ctx = opaque; NvmeRequest *req = ctx->req; NvmeNamespace *ns = req->ns; - BlockBackend *blk = ns->blkconf.blk; + BlockBackend *blk = nvme_blk(ns); trace_pci_nvme_dif_rw_cb(nvme_cid(req), blk_name(blk)); @@ -311,7 +311,7 @@ static void nvme_dif_rw_mdata_in_cb(void *opaque, int ret) uint32_t nlb = le16_to_cpu(rw->nlb) + 1; size_t mlen = nvme_m2b(nvm, nlb); uint64_t offset = nvme_moff(nvm, slba); - BlockBackend *blk = ns->blkconf.blk; + BlockBackend *blk = nvme_blk(ns); trace_pci_nvme_dif_rw_mdata_in_cb(nvme_cid(req), blk_name(blk)); @@ -341,7 +341,7 @@ static void nvme_dif_rw_mdata_out_cb(void *opaque, int ret) NvmeRwCmd *rw = (NvmeRwCmd *)&req->cmd; uint64_t slba = le64_to_cpu(rw->slba); uint64_t offset = nvme_moff(nvm, slba); - BlockBackend *blk = ns->blkconf.blk; + BlockBackend *blk = nvme_blk(ns); trace_pci_nvme_dif_rw_mdata_out_cb(nvme_cid(req), blk_name(blk)); @@ -362,7 +362,7 @@ uint16_t nvme_dif_rw(NvmeCtrl *n, NvmeRequest *req) NvmeRwCmd *rw = (NvmeRwCmd *)&req->cmd; NvmeNamespace *ns = req->ns; NvmeNamespaceNvm *nvm = 
NVME_NAMESPACE_NVM(ns); - BlockBackend *blk = ns->blkconf.blk; + BlockBackend *blk = nvme_blk(ns); bool wrz = rw->opcode == NVME_CMD_WRITE_ZEROES; uint32_t nlb = le16_to_cpu(rw->nlb) + 1; uint64_t slba = le64_to_cpu(rw->slba); @@ -451,7 +451,7 @@ uint16_t nvme_dif_rw(NvmeCtrl *n, NvmeRequest *req) block_acct_start(blk_get_stats(blk), &req->acct, ctx->data.iov.size, BLOCK_ACCT_READ); - req->aiocb = blk_aio_preadv(ns->blkconf.blk, offset, &ctx->data.iov, 0, + req->aiocb = blk_aio_preadv(nvme_blk(ns), offset, &ctx->data.iov, 0, nvme_dif_rw_mdata_in_cb, ctx); return NVME_NO_COMPLETE; } @@ -497,7 +497,7 @@ uint16_t nvme_dif_rw(NvmeCtrl *n, NvmeRequest *req) block_acct_start(blk_get_stats(blk), &req->acct, ctx->data.iov.size, BLOCK_ACCT_WRITE); - req->aiocb = blk_aio_pwritev(ns->blkconf.blk, offset, &ctx->data.iov, 0, + req->aiocb = blk_aio_pwritev(nvme_blk(ns), offset, &ctx->data.iov, 0, nvme_dif_rw_mdata_out_cb, ctx); return NVME_NO_COMPLETE; diff --git a/hw/nvme/nvme.h b/hw/nvme/nvme.h index c5e08cf9e1c1..525bfd0ca831 100644 --- a/hw/nvme/nvme.h +++ b/hw/nvme/nvme.h @@ -155,6 +155,7 @@ enum { typedef struct NvmeNamespaceNvm { NvmeIdNs id_ns; + BlockBackend *blk; int64_t size; int64_t moff; @@ -193,6 +194,11 @@ typedef struct NvmeNamespace { #define NVME_NAMESPACE_NVM(ns) (&(ns)->nvm) #define NVME_NAMESPACE_ZONED(ns) (&(ns)->zoned) +static inline BlockBackend *nvme_blk(NvmeNamespace *ns) +{ + return NVME_NAMESPACE_NVM(ns)->blk; +} + static inline uint32_t nvme_nsid(NvmeNamespace *ns) { if (ns) { From patchwork Tue Sep 14 20:37:30 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Klaus Jensen X-Patchwork-Id: 1528130 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Authentication-Results: ozlabs.org; dkim=fail reason="signature verification failed" (2048-bit key; unprotected) header.d=irrelevant.dk header.i=@irrelevant.dk header.a=rsa-sha256 
header.s=fm1 header.b=LjLMtNnk; dkim=fail reason="signature verification failed" (2048-bit key; unprotected) header.d=messagingengine.com header.i=@messagingengine.com header.a=rsa-sha256 header.s=fm3 header.b=p5Ed8p2r; dkim-atps=neutral Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=nongnu.org (client-ip=209.51.188.17; helo=lists.gnu.org; envelope-from=qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org; receiver=) Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 4H8FdT6Gk6z9sRN for ; Wed, 15 Sep 2021 06:42:05 +1000 (AEST) Received: from localhost ([::1]:56574 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1mQFFj-00073i-G3 for incoming@patchwork.ozlabs.org; Tue, 14 Sep 2021 16:42:03 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:55008) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1mQFBx-0005fl-IA; Tue, 14 Sep 2021 16:38:09 -0400 Received: from wout3-smtp.messagingengine.com ([64.147.123.19]:55093) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1mQFBs-0004nL-Cq; Tue, 14 Sep 2021 16:38:05 -0400 Received: from compute2.internal (compute2.nyi.internal [10.202.2.42]) by mailout.west.internal (Postfix) with ESMTP id 0705D32009FF; Tue, 14 Sep 2021 16:38:01 -0400 (EDT) Received: from mailfrontend2 ([10.202.2.163]) by compute2.internal (MEProxy); Tue, 14 Sep 2021 16:38:02 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=irrelevant.dk; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; s=fm1; bh=38nnrBM97XPIY VmHe5MfH0sS2KBUeMhYG5oVjJyrNKc=; b=LjLMtNnkSyCMa8LzgPc19hOBztOdk GSjuOg2XNy9iRcNERNed1vm08D4yOcfC7WdIx2g2t3YkELDNSy107QETJ73Bqb0H 
2yw8dS7VrweWfEkmMcC78ViuI7NEguhrtv+yWx0QbEuUZ04w3cLfzFfktFop2Uir JezjdhV57S5qk1AhWoK4lgoirdQUY6NWWjwAsd+cfi9U4mBI1SqNGvtEU6mrI2go l1bPraM5w+Jv1oeNvei1Q+KQ6ZSX06Lm1h5yyxlquQP7q7CO5QgNrWb3fYoQu0nm ZPi4aKuRn5rwWVFzdDru0zsdjvX+wKiAcO3pIfwxFuxS0X4vdWkldlLAg== DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d= messagingengine.com; h=cc:content-transfer-encoding:date:from :in-reply-to:message-id:mime-version:references:subject:to :x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s= fm3; bh=38nnrBM97XPIYVmHe5MfH0sS2KBUeMhYG5oVjJyrNKc=; b=p5Ed8p2r /147Yb2z48NRNTzHbdPjfN/tRuy6n38RxQnG5ljpJ4vYQDl/xzvCIRo7wfsID6H0 5mFyHHwcXaaC0k9OyBUYE0QJhzn+cDY6SQ3xDKvTToF+nAT8l2HhBYe+wph/hKtp 7wIMsmwuUu2PPXSUNXqRE0fxbkZxHR6VZg7BtGSDlRbz7uwYw5I892dsfkBLsUP4 f6YgxOrEmapBwl7mGCgxbBKdaweKSucfyMFknYymL1alb+mwGZJofzNLaZX337uW ZUfcVz4WQ5zBsRMYSCALaaKwM8ScnDWJKLJ1foC2KtX0FTwVwGUXRI+OuL7HGN4A fKLElZdx/abLqA== X-ME-Sender: X-ME-Received: X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvtddrudegledgudegkecutefuodetggdotefrod ftvfcurfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfgh necuuegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmd enucfjughrpefhvffufffkofgjfhgggfestdekredtredttdenucfhrhhomhepmfhlrghu shculfgvnhhsvghnuceoihhtshesihhrrhgvlhgvvhgrnhhtrdgukheqnecuggftrfgrth htvghrnhepueelteegieeuhffgkeefgfevjeeigfetkeeitdfgtdeifefhtdfhfeeuffev gfeknecuvehluhhsthgvrhfuihiivgepudenucfrrghrrghmpehmrghilhhfrhhomhepih htshesihhrrhgvlhgvvhgrnhhtrdgukh X-ME-Proxy: Received: by mail.messagingengine.com (Postfix) with ESMTPA; Tue, 14 Sep 2021 16:38:00 -0400 (EDT) From: Klaus Jensen To: qemu-devel@nongnu.org Subject: [PATCH RFC 06/13] nvme: add structured type for nguid Date: Tue, 14 Sep 2021 22:37:30 +0200 Message-Id: <20210914203737.182571-7-its@irrelevant.dk> X-Mailer: git-send-email 2.33.0 In-Reply-To: <20210914203737.182571-1-its@irrelevant.dk> References: <20210914203737.182571-1-its@irrelevant.dk> MIME-Version: 1.0 Received-SPF: pass client-ip=64.147.123.19; 
envelope-from=its@irrelevant.dk; helo=wout3-smtp.messagingengine.com X-Spam_score_int: -27 X-Spam_score: -2.8 X-Spam_bar: -- X-Spam_report: (-2.8 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_LOW=-0.7, RCVD_IN_MSPIKE_H2=-0.001, SPF_HELO_PASS=-0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Kevin Wolf , qemu-block@nongnu.org, Klaus Jensen , Markus Armbruster , Klaus Jensen , Hanna Reitz , Stefan Hajnoczi , Keith Busch , =?utf-8?q?Philippe_Mathieu-Daud=C3=A9?= Errors-To: qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org Sender: "Qemu-devel" From: Klaus Jensen Add a structured type for NGUID. Signed-off-by: Klaus Jensen --- include/block/nvme.h | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/include/block/nvme.h b/include/block/nvme.h index 2bcabe561589..f41464ee19bd 100644 --- a/include/block/nvme.h +++ b/include/block/nvme.h @@ -1269,6 +1269,11 @@ typedef struct QEMU_PACKED NvmeLBAFE { #define NVME_NSID_BROADCAST 0xffffffff +typedef struct QEMU_PACKED NvmeNGUID { + uint8_t vspexid[8]; + uint64_t eui; +} NvmeNGUID; + typedef struct QEMU_PACKED NvmeIdNs { uint64_t nsze; uint64_t ncap; @@ -1300,7 +1305,7 @@ typedef struct QEMU_PACKED NvmeIdNs { uint32_t mcl; uint8_t msrc; uint8_t rsvd81[23]; - uint8_t nguid[16]; + NvmeNGUID nguid; uint64_t eui64; NvmeLBAF lbaf[16]; uint8_t rsvd192[192]; From patchwork Tue Sep 14 20:37:31 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Klaus Jensen X-Patchwork-Id: 1528133 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Authentication-Results: ozlabs.org; dkim=fail reason="signature verification failed" (2048-bit key; 
From: Klaus Jensen <its@irrelevant.dk>
To: qemu-devel@nongnu.org
Subject: [PATCH RFC 07/13] hw/nvme: hoist qdev state from namespace
Date: Tue, 14 Sep 2021 22:37:31 +0200
Message-Id: <20210914203737.182571-8-its@irrelevant.dk>
In-Reply-To: <20210914203737.182571-1-its@irrelevant.dk>
Signed-off-by: Klaus Jensen
---
 hw/nvme/ctrl.c |  32 +++---
 hw/nvme/ns.c   | 265 ++++++++++++++++++++++++++-----------------------
 hw/nvme/nvme.h |  45 ++++++---
 3 files changed, 187 insertions(+), 155 deletions(-)

diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c
index f05d85075f08..966fba605d79 100644
--- a/hw/nvme/ctrl.c
+++ b/hw/nvme/ctrl.c
@@ -4616,10 +4616,10 @@ static uint16_t nvme_identify_nslist(NvmeCtrl *n, NvmeRequest *req,
                 continue;
             }
         }
-        if (ns->params.nsid <= min_nsid) {
+        if (ns->nsid <= min_nsid) {
             continue;
         }
-        list_ptr[j++] = cpu_to_le32(ns->params.nsid);
+        list_ptr[j++] = cpu_to_le32(ns->nsid);
         if (j == data_len / sizeof(uint32_t)) {
             break;
         }
@@ -4664,10 +4664,10 @@ static uint16_t nvme_identify_nslist_csi(NvmeCtrl *n, NvmeRequest *req,
                 continue;
             }
         }
-        if (ns->params.nsid <= min_nsid || c->csi != ns->csi) {
+        if (ns->nsid <= min_nsid || c->csi != ns->csi) {
             continue;
         }
-        list_ptr[j++] = cpu_to_le32(ns->params.nsid);
+        list_ptr[j++] = cpu_to_le32(ns->nsid);
         if (j == data_len / sizeof(uint32_t)) {
             break;
         }
@@ -4714,14 +4714,14 @@ static uint16_t nvme_identify_ns_descr_list(NvmeCtrl *n, NvmeRequest *req)
     */
    uuid.hdr.nidt = NVME_NIDT_UUID;
    uuid.hdr.nidl = NVME_NIDL_UUID;
-    memcpy(uuid.v, ns->params.uuid.data, NVME_NIDL_UUID);
+    memcpy(uuid.v, ns->uuid.data, NVME_NIDL_UUID);
    memcpy(pos, &uuid, sizeof(uuid));
    pos += sizeof(uuid);
 
-    if (ns->params.eui64) {
+    if (ns->eui64.v) {
        eui64.hdr.nidt = NVME_NIDT_EUI64;
        eui64.hdr.nidl = NVME_NIDL_EUI64;
-        eui64.v = cpu_to_be64(ns->params.eui64);
+        eui64.v = cpu_to_be64(ns->eui64.v);
        memcpy(pos, &eui64, sizeof(eui64));
        pos += sizeof(eui64);
    }
@@ -5260,7 +5260,7 @@ static uint16_t nvme_ns_attachment(NvmeCtrl *n, NvmeRequest *req)
            return NVME_NS_ALREADY_ATTACHED | NVME_DNR;
        }
 
-        if (ns->attached && !ns->params.shared) {
+        if (ns->attached && !(ns->flags & NVME_NS_SHARED)) {
            return NVME_NS_PRIVATE | NVME_DNR;
        }
@@ -5338,12 +5338,12 @@ static void nvme_format_set(NvmeNamespace *ns, NvmeCmd *cmd)
    uint8_t mset = (dw10 >> 4) & 0x1;
    uint8_t pil = (dw10 >> 8) & 0x1;
 
-    trace_pci_nvme_format_set(ns->params.nsid, lbaf, mset, pi, pil);
+    trace_pci_nvme_format_set(ns->nsid, lbaf, mset, pi, pil);
 
    nvm->id_ns.dps = (pil << 3) | pi;
    nvm->id_ns.flbas = lbaf | (mset << 4);
 
-    nvme_ns_init_format(ns);
+    nvme_ns_nvm_init_format(nvm);
 }
 
 static void nvme_format_ns_cb(void *opaque, int ret)
@@ -6544,7 +6544,7 @@ static int nvme_init_subsys(NvmeCtrl *n, Error **errp)
 void nvme_attach_ns(NvmeCtrl *n, NvmeNamespace *ns)
 {
    NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns);
-    uint32_t nsid = ns->params.nsid;
+    uint32_t nsid = ns->nsid;
 
    assert(nsid && nsid <= NVME_MAX_NAMESPACES);
    n->namespaces[nsid] = ns;
@@ -6557,7 +6557,6 @@ void nvme_attach_ns(NvmeCtrl *n, NvmeNamespace *ns)
 static void nvme_realize(PCIDevice *pci_dev, Error **errp)
 {
    NvmeCtrl *n = NVME(pci_dev);
-    NvmeNamespace *ns;
    Error *local_err = NULL;
 
    nvme_check_constraints(n, &local_err);
@@ -6582,12 +6581,11 @@ static void nvme_realize(PCIDevice *pci_dev, Error **errp)
    /* setup a namespace if the controller drive property was given */
    if (n->namespace.blkconf.blk) {
-        ns = &n->namespace;
-        ns->params.nsid = 1;
+        NvmeNamespaceDevice *nsdev = &n->namespace;
+        NvmeNamespace *ns = &nsdev->ns;
+        ns->nsid = 1;
 
-        if (nvme_ns_setup(ns, errp)) {
-            return;
-        }
+        nvme_ns_init(ns);
 
        nvme_attach_ns(n, ns);
    }
diff --git a/hw/nvme/ns.c b/hw/nvme/ns.c
index 0e231102c475..b411b184c253 100644
--- a/hw/nvme/ns.c
+++ b/hw/nvme/ns.c
@@ -26,9 +26,8 @@
 
 #define MIN_DISCARD_GRANULARITY (4 * KiB)
 
-void nvme_ns_init_format(NvmeNamespace *ns)
+void nvme_ns_nvm_init_format(NvmeNamespaceNvm *nvm)
 {
-    NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns);
    NvmeIdNs *id_ns = &nvm->id_ns;
    BlockDriverInfo bdi;
    int npdg, nlbas, ret;
@@ -48,7 +47,7 @@ void nvme_ns_init_format(NvmeNamespace *ns)
 
    npdg = nvm->discard_granularity / nvm->lbasz;
 
-    ret = bdrv_get_info(blk_bs(ns->blkconf.blk), &bdi);
+    ret = bdrv_get_info(blk_bs(nvm->blk), &bdi);
    if (ret >= 0 && bdi.cluster_size > nvm->discard_granularity) {
        npdg = bdi.cluster_size / nvm->lbasz;
    }
@@ -56,53 +55,39 @@ void nvme_ns_init_format(NvmeNamespace *ns)
    id_ns->npda = id_ns->npdg = npdg - 1;
 }
 
-static int nvme_ns_init(NvmeNamespace *ns, Error **errp)
+void nvme_ns_init(NvmeNamespace *ns)
 {
    NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns);
-    static uint64_t ns_count;
    NvmeIdNs *id_ns = &nvm->id_ns;
    uint8_t ds;
    uint16_t ms;
    int i;
 
-    ns->csi = NVME_CSI_NVM;
-    ns->status = 0x0;
-
-    nvm->id_ns.dlfeat = 0x1;
+    id_ns->dlfeat = 0x1;
 
    /* support DULBE and I/O optimization fields */
    id_ns->nsfeat |= (0x4 | 0x10);
 
-    if (ns->params.shared) {
+    if (ns->flags & NVME_NS_SHARED) {
        id_ns->nmic |= NVME_NMIC_NS_SHARED;
    }
 
-    /* Substitute a missing EUI-64 by an autogenerated one */
-    ++ns_count;
-    if (!ns->params.eui64 && ns->params.eui64_default) {
-        ns->params.eui64 = ns_count + NVME_EUI64_DEFAULT;
-    }
-
    /* simple copy */
    id_ns->mssrl = cpu_to_le16(nvm->mssrl);
    id_ns->mcl = cpu_to_le32(nvm->mcl);
    id_ns->msrc = nvm->msrc;
-    id_ns->eui64 = cpu_to_be64(ns->params.eui64);
+    id_ns->eui64 = cpu_to_be64(ns->eui64.v);
 
    ds = 31 - clz32(nvm->lbasz);
-    ms = ns->params.ms;
+    ms = nvm->lbaf.ms;
 
    id_ns->mc = NVME_ID_NS_MC_EXTENDED | NVME_ID_NS_MC_SEPARATE;
 
-    if (ms && ns->params.mset) {
+    if (ms && nvm->flags & NVME_NS_NVM_EXTENDED_LBA) {
        id_ns->flbas |= NVME_ID_NS_FLBAS_EXTENDED;
    }
 
    id_ns->dpc = 0x1f;
-    id_ns->dps = ns->params.pi;
-    if (ns->params.pi && ns->params.pil) {
-        id_ns->dps |= NVME_ID_NS_DPS_FIRST_EIGHT;
-    }
 
    static const NvmeLBAF lbaf[16] = {
        [0] = { .ds = 9 },
@@ -135,59 +120,63 @@ static int nvme_ns_init(NvmeNamespace *ns, Error **errp)
    id_ns->flbas |= id_ns->nlbaf;
 
 lbaf_found:
-    nvme_ns_init_format(ns);
-
-    return 0;
+    nvme_ns_nvm_init_format(nvm);
 }
 
-static int nvme_ns_init_blk(NvmeNamespace *ns, Error **errp)
+static int nvme_nsdev_init_blk(NvmeNamespaceDevice *nsdev,
+                               Error **errp)
 {
+    NvmeNamespace *ns = &nsdev->ns;
    NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns);
+    BlockConf *blkconf = &nsdev->blkconf;
    bool read_only;
 
-    if (!blkconf_blocksizes(&ns->blkconf, errp)) {
+    if (!blkconf_blocksizes(blkconf, errp)) {
        return -1;
    }
 
-    read_only = !blk_supports_write_perm(ns->blkconf.blk);
-    if (!blkconf_apply_backend_options(&ns->blkconf, read_only, false, errp)) {
+    read_only = !blk_supports_write_perm(blkconf->blk);
+    if (!blkconf_apply_backend_options(blkconf, read_only, false, errp)) {
        return -1;
    }
 
-    if (ns->blkconf.discard_granularity == -1) {
-        ns->blkconf.discard_granularity =
-            MAX(ns->blkconf.logical_block_size, MIN_DISCARD_GRANULARITY);
+    if (blkconf->discard_granularity == -1) {
+        blkconf->discard_granularity =
+            MAX(blkconf->logical_block_size, MIN_DISCARD_GRANULARITY);
    }
 
-    nvm->lbasz = ns->blkconf.logical_block_size;
-    nvm->discard_granularity = ns->blkconf.discard_granularity;
+    nvm->lbasz = blkconf->logical_block_size;
+    nvm->discard_granularity = blkconf->discard_granularity;
    nvm->lbaf.ds = 31 - clz32(nvm->lbasz);
-    nvm->lbaf.ms = ns->params.ms;
+    nvm->lbaf.ms = nsdev->params.ms;
+    nvm->blk = blkconf->blk;
 
-    nvm->size = blk_getlength(ns->blkconf.blk);
+    nvm->size = blk_getlength(nvm->blk);
    if (nvm->size < 0) {
-        error_setg_errno(errp, -nvm->size, "could not get blockdev size");
+        error_setg_errno(errp, -(nvm->size), "could not get blockdev size");
        return -1;
    }
 
    return 0;
 }
 
-static int nvme_ns_zoned_check_calc_geometry(NvmeNamespace *ns, Error **errp)
+static int nvme_nsdev_zoned_check_calc_geometry(NvmeNamespaceDevice *nsdev,
+                                                Error **errp)
 {
+    NvmeNamespace *ns = &nsdev->ns;
    NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns);
    NvmeNamespaceZoned *zoned = NVME_NAMESPACE_ZONED(ns);
    uint64_t zone_size, zone_cap;
 
    /* Make sure that the values of ZNS properties are sane */
-    if (ns->params.zone_size_bs) {
-        zone_size = ns->params.zone_size_bs;
+    if (nsdev->params.zone_size_bs) {
+        zone_size = nsdev->params.zone_size_bs;
    } else {
        zone_size = NVME_DEFAULT_ZONE_SIZE;
    }
-    if (ns->params.zone_cap_bs) {
-        zone_cap = ns->params.zone_cap_bs;
+    if (nsdev->params.zone_cap_bs) {
+        zone_cap = nsdev->params.zone_cap_bs;
    } else {
        zone_cap = zone_size;
    }
@@ -359,46 +348,47 @@ static void nvme_ns_zoned_shutdown(NvmeNamespaceZoned *zoned)
    assert(zoned->nr_open_zones == 0);
 }
 
-static int nvme_ns_check_constraints(NvmeNamespace *ns, Error **errp)
+static int nvme_nsdev_check_constraints(NvmeNamespaceDevice *nsdev,
+                                        Error **errp)
 {
-    if (!ns->blkconf.blk) {
+    if (!nsdev->blkconf.blk) {
        error_setg(errp, "block backend not configured");
        return -1;
    }
 
-    if (ns->params.pi && ns->params.ms < 8) {
+    if (nsdev->params.pi && nsdev->params.ms < 8) {
        error_setg(errp, "at least 8 bytes of metadata required to enable "
                   "protection information");
        return -1;
    }
 
-    if (ns->params.nsid > NVME_MAX_NAMESPACES) {
+    if (nsdev->params.nsid > NVME_MAX_NAMESPACES) {
        error_setg(errp, "invalid namespace id (must be between 0 and %d)",
                   NVME_MAX_NAMESPACES);
        return -1;
    }
 
-    if (ns->params.zoned) {
-        if (ns->params.max_active_zones) {
-            if (ns->params.max_open_zones > ns->params.max_active_zones) {
+    if (nsdev->params.zoned) {
+        if (nsdev->params.max_active_zones) {
+            if (nsdev->params.max_open_zones > nsdev->params.max_active_zones) {
                error_setg(errp, "max_open_zones (%u) exceeds "
-                           "max_active_zones (%u)", ns->params.max_open_zones,
-                           ns->params.max_active_zones);
+                           "max_active_zones (%u)", nsdev->params.max_open_zones,
+                           nsdev->params.max_active_zones);
                return -1;
            }
 
-            if (!ns->params.max_open_zones) {
-                ns->params.max_open_zones = ns->params.max_active_zones;
+            if (!nsdev->params.max_open_zones) {
+                nsdev->params.max_open_zones = nsdev->params.max_active_zones;
            }
        }
 
-        if (ns->params.zd_extension_size) {
-            if (ns->params.zd_extension_size & 0x3f) {
+        if (nsdev->params.zd_extension_size) {
+            if (nsdev->params.zd_extension_size & 0x3f) {
                error_setg(errp, "zone descriptor extension size must be a "
                           "multiple of 64B");
                return -1;
            }
-            if ((ns->params.zd_extension_size >> 6) > 0xff) {
+            if ((nsdev->params.zd_extension_size >> 6) > 0xff) {
                error_setg(errp,
                           "zone descriptor extension size is too large");
                return -1;
@@ -409,35 +399,57 @@ static int nvme_ns_check_constraints(NvmeNamespace *ns, Error **errp)
    return 0;
 }
 
-int nvme_ns_setup(NvmeNamespace *ns, Error **errp)
+static int nvme_nsdev_setup(NvmeNamespaceDevice *nsdev, Error **errp)
 {
-    if (nvme_ns_check_constraints(ns, errp)) {
+    NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(&nsdev->ns);
+    static uint64_t ns_count;
+
+    if (nvme_nsdev_check_constraints(nsdev, errp)) {
        return -1;
    }
 
-    if (nvme_ns_init_blk(ns, errp)) {
-        return -1;
+    if (nsdev->params.shared) {
+        nsdev->ns.flags |= NVME_NS_SHARED;
    }
 
-    if (nvme_ns_init(ns, errp)) {
-        return -1;
-    }
-
-    if (ns->params.zoned) {
-        NvmeNamespaceZoned *zoned = NVME_NAMESPACE_ZONED(ns);
+    nsdev->ns.nsid = nsdev->params.nsid;
+    memcpy(&nsdev->ns.uuid, &nsdev->params.uuid, sizeof(nsdev->ns.uuid));
 
-        if (nvme_ns_zoned_check_calc_geometry(ns, errp) != 0) {
+    if (nsdev->params.eui64) {
+        stq_be_p(&nsdev->ns.eui64.v, nsdev->params.eui64);
+    }
+
+    /* Substitute a missing EUI-64 by an autogenerated one */
+    ++ns_count;
+    if (!nsdev->ns.eui64.v && nsdev->params.eui64_default) {
+        nsdev->ns.eui64.v = ns_count + NVME_EUI64_DEFAULT;
+    }
+
+    nvm->id_ns.dps = nsdev->params.pi;
+    if (nsdev->params.pi && nsdev->params.pil) {
+        nvm->id_ns.dps |= NVME_ID_NS_DPS_FIRST_EIGHT;
+    }
+
+    nsdev->ns.csi = NVME_CSI_NVM;
+
+    nvme_ns_init(&nsdev->ns);
+
+    if (nsdev->params.zoned) {
+        NvmeNamespaceZoned *zoned = NVME_NAMESPACE_ZONED(&nsdev->ns);
+
+        if (nvme_nsdev_zoned_check_calc_geometry(nsdev, errp) != 0) {
            return -1;
        }
 
        /* copy device parameters */
-        zoned->zd_extension_size = ns->params.zd_extension_size;
-        zoned->max_open_zones = ns->params.max_open_zones;
-        zoned->max_active_zones = ns->params.max_open_zones;
-        if (ns->params.cross_zone_read) {
+        zoned->zd_extension_size = nsdev->params.zd_extension_size;
+        zoned->max_open_zones = nsdev->params.max_open_zones;
+        zoned->max_active_zones = nsdev->params.max_open_zones;
+        if (nsdev->params.cross_zone_read) {
            zoned->flags |= NVME_NS_ZONED_CROSS_READ;
        }
 
-        nvme_ns_zoned_init(ns);
+        nvme_ns_zoned_init(&nsdev->ns);
    }
 
    return 0;
@@ -445,12 +457,12 @@ int nvme_ns_setup(NvmeNamespace *ns, Error **errp)
 
 void nvme_ns_drain(NvmeNamespace *ns)
 {
-    blk_drain(ns->blkconf.blk);
+    blk_drain(nvme_blk(ns));
 }
 
 void nvme_ns_shutdown(NvmeNamespace *ns)
 {
-    blk_flush(ns->blkconf.blk);
+    blk_flush(nvme_blk(ns));
    if (nvme_ns_zoned(ns)) {
        nvme_ns_zoned_shutdown(NVME_NAMESPACE_ZONED(ns));
    }
@@ -466,32 +478,34 @@ void nvme_ns_cleanup(NvmeNamespace *ns)
    }
 }
 
-static void nvme_ns_unrealize(DeviceState *dev)
+static void nvme_nsdev_unrealize(DeviceState *dev)
 {
-    NvmeNamespace *ns = NVME_NS(dev);
+    NvmeNamespaceDevice *nsdev = NVME_NAMESPACE_DEVICE(dev);
+    NvmeNamespace *ns = &nsdev->ns;
 
    nvme_ns_drain(ns);
    nvme_ns_shutdown(ns);
    nvme_ns_cleanup(ns);
 }
 
-static void nvme_ns_realize(DeviceState *dev, Error **errp)
+static void nvme_nsdev_realize(DeviceState *dev, Error **errp)
 {
-    NvmeNamespace *ns = NVME_NS(dev);
+    NvmeNamespaceDevice *nsdev = NVME_NAMESPACE_DEVICE(dev);
+    NvmeNamespace *ns = &nsdev->ns;
    BusState *s = qdev_get_parent_bus(dev);
    NvmeCtrl *n = NVME(s->parent);
    NvmeSubsystem *subsys = n->subsys;
-    uint32_t nsid = ns->params.nsid;
+    uint32_t nsid = nsdev->params.nsid;
    int i;
 
    if (!n->subsys) {
-        if (ns->params.detached) {
+        if (nsdev->params.detached) {
            error_setg(errp, "detached requires that the nvme device is "
                       "linked to an nvme-subsys device");
            return;
        }
 
-        if (ns->params.shared) {
+        if (nsdev->params.shared) {
            error_setg(errp, "shared requires that the nvme device is "
                       "linked to an nvme-subsys device");
            return;
@@ -506,7 +520,11 @@ static void nvme_ns_realize(DeviceState *dev, Error **errp)
        }
    }
 
-    if (nvme_ns_setup(ns, errp)) {
+    if (nvme_nsdev_init_blk(nsdev, errp)) {
+        return;
+    }
+
+    if (nvme_nsdev_setup(nsdev, errp)) {
        return;
    }
@@ -516,7 +534,7 @@ static void nvme_ns_realize(DeviceState *dev, Error **errp)
                continue;
            }
 
-            nsid = ns->params.nsid = i;
+            nsid = ns->nsid = i;
            break;
        }
@@ -534,11 +552,11 @@ static void nvme_ns_realize(DeviceState *dev, Error **errp)
    if (subsys) {
        subsys->namespaces[nsid] = ns;
 
-        if (ns->params.detached) {
+        if (nsdev->params.detached) {
            return;
        }
 
-        if (ns->params.shared) {
+        if (nsdev->params.shared) {
            for (i = 0; i < ARRAY_SIZE(subsys->ctrls); i++) {
                NvmeCtrl *ctrl = subsys->ctrls[i];
@@ -554,73 +572,74 @@ static void nvme_ns_realize(DeviceState *dev, Error **errp)
    nvme_attach_ns(n, ns);
 }
 
-static Property nvme_ns_props[] = {
-    DEFINE_BLOCK_PROPERTIES(NvmeNamespace, blkconf),
-    DEFINE_PROP_BOOL("detached", NvmeNamespace, params.detached, false),
-    DEFINE_PROP_BOOL("shared", NvmeNamespace, params.shared, false),
-    DEFINE_PROP_UINT32("nsid", NvmeNamespace, params.nsid, 0),
-    DEFINE_PROP_UUID("uuid", NvmeNamespace, params.uuid),
-    DEFINE_PROP_UINT64("eui64", NvmeNamespace, params.eui64, 0),
-    DEFINE_PROP_UINT16("ms", NvmeNamespace, params.ms, 0),
-    DEFINE_PROP_UINT8("mset", NvmeNamespace, params.mset, 0),
-    DEFINE_PROP_UINT8("pi", NvmeNamespace, params.pi, 0),
-    DEFINE_PROP_UINT8("pil", NvmeNamespace, params.pil, 0),
-    DEFINE_PROP_UINT16("mssrl", NvmeNamespace, params.mssrl, 128),
-    DEFINE_PROP_UINT32("mcl", NvmeNamespace, params.mcl, 128),
-    DEFINE_PROP_UINT8("msrc", NvmeNamespace, params.msrc, 127),
-    DEFINE_PROP_BOOL("zoned", NvmeNamespace, params.zoned, false),
-    DEFINE_PROP_SIZE("zoned.zone_size", NvmeNamespace, params.zone_size_bs,
+static Property nvme_nsdev_props[] = {
+    DEFINE_BLOCK_PROPERTIES(NvmeNamespaceDevice, blkconf),
+    DEFINE_PROP_BOOL("detached", NvmeNamespaceDevice, params.detached, false),
+    DEFINE_PROP_BOOL("shared", NvmeNamespaceDevice, params.shared, false),
+    DEFINE_PROP_UINT32("nsid", NvmeNamespaceDevice, params.nsid, 0),
+    DEFINE_PROP_UUID("uuid", NvmeNamespaceDevice, params.uuid),
+    DEFINE_PROP_UINT64("eui64", NvmeNamespaceDevice, params.eui64, 0),
+    DEFINE_PROP_UINT16("ms", NvmeNamespaceDevice, params.ms, 0),
+    DEFINE_PROP_UINT8("mset", NvmeNamespaceDevice, params.mset, 0),
+    DEFINE_PROP_UINT8("pi", NvmeNamespaceDevice, params.pi, 0),
+    DEFINE_PROP_UINT8("pil", NvmeNamespaceDevice, params.pil, 0),
+    DEFINE_PROP_UINT16("mssrl", NvmeNamespaceDevice, params.mssrl, 128),
+    DEFINE_PROP_UINT32("mcl", NvmeNamespaceDevice, params.mcl, 128),
+    DEFINE_PROP_UINT8("msrc", NvmeNamespaceDevice, params.msrc, 127),
+    DEFINE_PROP_BOOL("zoned", NvmeNamespaceDevice, params.zoned, false),
+    DEFINE_PROP_SIZE("zoned.zone_size", NvmeNamespaceDevice, params.zone_size_bs,
                     NVME_DEFAULT_ZONE_SIZE),
-    DEFINE_PROP_SIZE("zoned.zone_capacity", NvmeNamespace, params.zone_cap_bs,
+    DEFINE_PROP_SIZE("zoned.zone_capacity", NvmeNamespaceDevice, params.zone_cap_bs,
                     0),
-    DEFINE_PROP_BOOL("zoned.cross_read", NvmeNamespace,
+    DEFINE_PROP_BOOL("zoned.cross_read", NvmeNamespaceDevice,
                     params.cross_zone_read, false),
-    DEFINE_PROP_UINT32("zoned.max_active", NvmeNamespace,
+    DEFINE_PROP_UINT32("zoned.max_active", NvmeNamespaceDevice,
                     params.max_active_zones, 0),
-    DEFINE_PROP_UINT32("zoned.max_open", NvmeNamespace,
+    DEFINE_PROP_UINT32("zoned.max_open", NvmeNamespaceDevice,
                     params.max_open_zones, 0),
-    DEFINE_PROP_UINT32("zoned.descr_ext_size", NvmeNamespace,
+    DEFINE_PROP_UINT32("zoned.descr_ext_size", NvmeNamespaceDevice,
                     params.zd_extension_size, 0),
-    DEFINE_PROP_BOOL("eui64-default", NvmeNamespace, params.eui64_default,
+    DEFINE_PROP_BOOL("eui64-default", NvmeNamespaceDevice, params.eui64_default,
                     true),
    DEFINE_PROP_END_OF_LIST(),
 };
 
-static void nvme_ns_class_init(ObjectClass *oc, void *data)
+static void nvme_nsdev_class_init(ObjectClass *oc, void *data)
 {
    DeviceClass *dc = DEVICE_CLASS(oc);
 
    set_bit(DEVICE_CATEGORY_STORAGE, dc->categories);
 
    dc->bus_type = TYPE_NVME_BUS;
-    dc->realize = nvme_ns_realize;
-    dc->unrealize = nvme_ns_unrealize;
-    device_class_set_props(dc, nvme_ns_props);
+    dc->realize = nvme_nsdev_realize;
+    dc->unrealize = nvme_nsdev_unrealize;
+    device_class_set_props(dc, nvme_nsdev_props);
    dc->desc = "Virtual NVMe namespace";
 }
 
-static void nvme_ns_instance_init(Object *obj)
+static void nvme_nsdev_instance_init(Object *obj)
 {
-    NvmeNamespace *ns = NVME_NS(obj);
-    char *bootindex = g_strdup_printf("/namespace@%d,0", ns->params.nsid);
+    NvmeNamespaceDevice *nsdev = NVME_NAMESPACE_DEVICE(obj);
+    char *bootindex = g_strdup_printf("/namespace@%d,0",
+                                      nsdev->params.nsid);
 
-    device_add_bootindex_property(obj, &ns->bootindex, "bootindex",
+    device_add_bootindex_property(obj, &nsdev->bootindex, "bootindex",
                                  bootindex, DEVICE(obj));
 
    g_free(bootindex);
 }
 
-static const TypeInfo nvme_ns_info = {
-    .name = TYPE_NVME_NS,
+static const TypeInfo nvme_nsdev_info = {
+    .name = TYPE_NVME_NAMESPACE_DEVICE,
    .parent = TYPE_DEVICE,
-    .class_init = nvme_ns_class_init,
-    .instance_size = sizeof(NvmeNamespace),
-    .instance_init = nvme_ns_instance_init,
+    .class_init = nvme_nsdev_class_init,
+    .instance_size = sizeof(NvmeNamespaceDevice),
+    .instance_init = nvme_nsdev_instance_init,
 };
 
-static void nvme_ns_register_types(void)
+static void register_types(void)
 {
-    type_register_static(&nvme_ns_info);
+    type_register_static(&nvme_nsdev_info);
 }
 
-type_init(nvme_ns_register_types)
+type_init(register_types)
diff --git a/hw/nvme/nvme.h b/hw/nvme/nvme.h
index 525bfd0ca831..4ae4b4e5ffe1 100644
--- a/hw/nvme/nvme.h
+++ b/hw/nvme/nvme.h
@@ -80,9 +80,8 @@ static inline NvmeNamespace *nvme_subsys_ns(NvmeSubsystem *subsys,
    return subsys->namespaces[nsid];
 }
 
-#define TYPE_NVME_NS "nvme-ns"
-#define NVME_NS(obj) \
-    OBJECT_CHECK(NvmeNamespace, (obj), TYPE_NVME_NS)
+#define TYPE_NVME_NAMESPACE_DEVICE "nvme-ns"
+OBJECT_DECLARE_SIMPLE_TYPE(NvmeNamespaceDevice, NVME_NAMESPACE_DEVICE)
 
 typedef struct NvmeNamespaceParams {
    bool detached;
@@ -170,19 +169,26 @@ typedef struct NvmeNamespaceNvm {
    unsigned long flags;
 } NvmeNamespaceNvm;
 
+enum NvmeNamespaceFlags {
+    NVME_NS_SHARED = 1 << 0,
+};
+
 typedef struct NvmeNamespace {
-    DeviceState parent_obj;
-    BlockConf blkconf;
-    int32_t bootindex;
+    uint32_t nsid;
+    uint8_t csi;
+    QemuUUID uuid;
+    union {
+        uint64_t v;
+        uint8_t a[8];
+    } eui64;
+    NvmeNGUID nguid;
+
+    unsigned long flags;
+
    const uint32_t *iocs;
-    uint8_t  csi;
    uint16_t status;
    int attached;
 
-    QTAILQ_ENTRY(NvmeNamespace) entry;
-
-    NvmeNamespaceParams params;
-
    struct {
        uint32_t err_rec;
    } features;
@@ -199,10 +205,19 @@ static inline BlockBackend *nvme_blk(NvmeNamespace *ns)
    return NVME_NAMESPACE_NVM(ns)->blk;
 }
 
+typedef struct NvmeNamespaceDevice {
+    DeviceState parent_obj;
+    BlockConf blkconf;
+    int32_t bootindex;
+
+    NvmeNamespace ns;
+    NvmeNamespaceParams params;
+} NvmeNamespaceDevice;
+
 static inline uint32_t nvme_nsid(NvmeNamespace *ns)
 {
    if (ns) {
-        return ns->params.nsid;
+        return ns->nsid;
    }
 
    return 0;
@@ -228,8 +243,8 @@ static inline bool nvme_ns_ext(NvmeNamespaceNvm *nvm)
    return !!NVME_ID_NS_FLBAS_EXTENDED(nvm->id_ns.flbas);
 }
 
-void nvme_ns_init_format(NvmeNamespace *ns);
-int nvme_ns_setup(NvmeNamespace *ns, Error **errp);
+void nvme_ns_nvm_init_format(NvmeNamespaceNvm *nvm);
+void nvme_ns_init(NvmeNamespace *ns);
 void nvme_ns_drain(NvmeNamespace *ns);
 void nvme_ns_shutdown(NvmeNamespace *ns);
 void nvme_ns_cleanup(NvmeNamespace *ns);
@@ -424,7 +439,7 @@ typedef struct NvmeCtrl {
 
    NvmeSubsystem *subsys;
 
-    NvmeNamespace namespace;
+    NvmeNamespaceDevice namespace;
    NvmeNamespace *namespaces[NVME_MAX_NAMESPACES + 1];
    NvmeSQueue **sq;
    NvmeCQueue **cq;

From patchwork Tue Sep 14 20:37:32 2021
X-Patchwork-Submitter: Klaus Jensen
X-Patchwork-Id: 1528138
From: Klaus Jensen <its@irrelevant.dk>
To: qemu-devel@nongnu.org
Subject: [PATCH RFC 08/13] hw/nvme: hoist qdev state from controller
Date: Tue, 14 Sep 2021 22:37:32 +0200
Message-Id: <20210914203737.182571-9-its@irrelevant.dk>
In-Reply-To: <20210914203737.182571-1-its@irrelevant.dk>

Add an abstract object NvmeState.
Signed-off-by: Klaus Jensen
---
 hw/nvme/ctrl.c   | 382 +++++++++++++++++++++++++----------------------
 hw/nvme/dif.c    |   4 +-
 hw/nvme/dif.h    |   2 +-
 hw/nvme/ns.c     |   4 +-
 hw/nvme/nvme.h   |  52 ++++---
 hw/nvme/subsys.c |   4 +-
 6 files changed, 239 insertions(+), 209 deletions(-)

diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c
index 966fba605d79..6a4f07b8d114 100644
--- a/hw/nvme/ctrl.c
+++ b/hw/nvme/ctrl.c
@@ -323,7 +323,7 @@ static int nvme_ns_zoned_aor_check(NvmeNamespaceZoned *zoned, uint32_t act,
    return NVME_SUCCESS;
 }
 
-static bool nvme_addr_is_cmb(NvmeCtrl *n, hwaddr addr)
+static bool nvme_addr_is_cmb(NvmeState *n, hwaddr addr)
 {
    hwaddr hi, lo;
@@ -337,13 +337,13 @@ static bool nvme_addr_is_cmb(NvmeCtrl *n, hwaddr addr)
    return addr >= lo && addr < hi;
 }
 
-static inline void *nvme_addr_to_cmb(NvmeCtrl *n, hwaddr addr)
+static inline void *nvme_addr_to_cmb(NvmeState *n, hwaddr addr)
 {
    hwaddr base = n->params.legacy_cmb ? n->cmb.mem.addr : n->cmb.cba;
    return &n->cmb.buf[addr - base];
 }
 
-static bool nvme_addr_is_pmr(NvmeCtrl *n, hwaddr addr)
+static bool nvme_addr_is_pmr(NvmeState *n, hwaddr addr)
 {
    hwaddr hi;
@@ -356,12 +356,12 @@ static bool nvme_addr_is_pmr(NvmeCtrl *n, hwaddr addr)
    return addr >= n->pmr.cba && addr < hi;
 }
 
-static inline void *nvme_addr_to_pmr(NvmeCtrl *n, hwaddr addr)
+static inline void *nvme_addr_to_pmr(NvmeState *n, hwaddr addr)
 {
    return memory_region_get_ram_ptr(&n->pmr.dev->mr) + (addr - n->pmr.cba);
 }
 
-static int nvme_addr_read(NvmeCtrl *n, hwaddr addr, void *buf, int size)
+static int nvme_addr_read(NvmeState *n, hwaddr addr, void *buf, int size)
 {
    hwaddr hi = addr + size - 1;
    if (hi < addr) {
@@ -381,7 +381,7 @@ static int nvme_addr_read(NvmeCtrl *n, hwaddr addr, void *buf, int size)
    return pci_dma_read(&n->parent_obj, addr, buf, size);
 }
 
-static int nvme_addr_write(NvmeCtrl *n, hwaddr addr, void *buf, int size)
+static int nvme_addr_write(NvmeState *n, hwaddr addr, void *buf, int size)
 {
    hwaddr hi = addr + size - 1;
    if (hi < addr) {
@@ -401,18 +401,18 @@ static int nvme_addr_write(NvmeCtrl *n, hwaddr addr, void *buf, int size)
    return pci_dma_write(&n->parent_obj, addr, buf, size);
 }
 
-static bool nvme_nsid_valid(NvmeCtrl *n, uint32_t nsid)
+static bool nvme_nsid_valid(NvmeState *n, uint32_t nsid)
 {
    return nsid &&
        (nsid == NVME_NSID_BROADCAST || nsid <= NVME_MAX_NAMESPACES);
 }
 
-static int nvme_check_sqid(NvmeCtrl *n, uint16_t sqid)
+static int nvme_check_sqid(NvmeState *n, uint16_t sqid)
 {
    return sqid < n->params.max_ioqpairs + 1 && n->sq[sqid] != NULL ? 0 : -1;
 }
 
-static int nvme_check_cqid(NvmeCtrl *n, uint16_t cqid)
+static int nvme_check_cqid(NvmeState *n, uint16_t cqid)
 {
    return cqid < n->params.max_ioqpairs + 1 && n->cq[cqid] != NULL ? 0 : -1;
 }
@@ -441,7 +441,7 @@ static uint8_t nvme_sq_empty(NvmeSQueue *sq)
    return sq->head == sq->tail;
 }
 
-static void nvme_irq_check(NvmeCtrl *n)
+static void nvme_irq_check(NvmeState *n)
 {
    uint32_t intms = ldl_le_p(&n->bar.intms);
@@ -455,7 +455,7 @@ static void nvme_irq_check(NvmeCtrl *n)
    }
 }
 
-static void nvme_irq_assert(NvmeCtrl *n, NvmeCQueue *cq)
+static void nvme_irq_assert(NvmeState *n, NvmeCQueue *cq)
 {
    if (cq->irq_enabled) {
        if (msix_enabled(&(n->parent_obj))) {
@@ -472,7 +472,7 @@ static void nvme_irq_assert(NvmeCtrl *n, NvmeCQueue *cq)
    }
 }
 
-static void nvme_irq_deassert(NvmeCtrl *n, NvmeCQueue *cq)
+static void nvme_irq_deassert(NvmeState *n, NvmeCQueue *cq)
 {
    if (cq->irq_enabled) {
        if (msix_enabled(&(n->parent_obj))) {
@@ -496,7 +496,7 @@ static void nvme_req_clear(NvmeRequest *req)
    req->status = NVME_SUCCESS;
 }
 
-static inline void nvme_sg_init(NvmeCtrl *n, NvmeSg *sg, bool dma)
+static inline void nvme_sg_init(NvmeState *n, NvmeSg *sg, bool dma)
 {
    if (dma) {
        pci_dma_sglist_init(&sg->qsg, &n->parent_obj, 0);
@@ -574,7 +574,7 @@ static void nvme_sg_split(NvmeSg *sg, NvmeNamespaceNvm *nvm, NvmeSg *data,
    }
 }
 
-static uint16_t nvme_map_addr_cmb(NvmeCtrl *n, QEMUIOVector *iov, hwaddr addr,
+static uint16_t nvme_map_addr_cmb(NvmeState *n, QEMUIOVector *iov, hwaddr addr,
                                  size_t len)
 {
    if (!len) {
@@ -592,7 +592,7 @@ static uint16_t nvme_map_addr_cmb(NvmeCtrl *n, QEMUIOVector *iov, hwaddr addr,
    return NVME_SUCCESS;
 }
 
-static uint16_t nvme_map_addr_pmr(NvmeCtrl *n, QEMUIOVector *iov, hwaddr addr,
+static uint16_t nvme_map_addr_pmr(NvmeState *n, QEMUIOVector *iov, hwaddr addr,
                                  size_t len)
 {
    if (!len) {
@@ -608,7 +608,7 @@ static uint16_t nvme_map_addr_pmr(NvmeCtrl *n, QEMUIOVector *iov, hwaddr addr,
    return NVME_SUCCESS;
 }
 
-static uint16_t nvme_map_addr(NvmeCtrl *n, NvmeSg *sg, hwaddr addr, size_t len)
+static uint16_t nvme_map_addr(NvmeState *n, NvmeSg *sg, hwaddr addr, size_t len)
 {
    bool cmb = false, pmr = false;
@@ -658,12 +658,12 @@ max_mappings_exceeded:
    return NVME_INTERNAL_DEV_ERROR | NVME_DNR;
 }
 
-static inline bool nvme_addr_is_dma(NvmeCtrl *n, hwaddr addr)
+static inline bool nvme_addr_is_dma(NvmeState *n, hwaddr addr)
 {
    return !(nvme_addr_is_cmb(n, addr) || nvme_addr_is_pmr(n, addr));
 }
 
-static uint16_t nvme_map_prp(NvmeCtrl *n, NvmeSg *sg, uint64_t prp1,
+static uint16_t nvme_map_prp(NvmeState *n, NvmeSg *sg, uint64_t prp1,
                             uint64_t prp2, uint32_t len)
 {
    hwaddr trans_len = n->page_size - (prp1 % n->page_size);
@@ -764,7 +764,7 @@ unmap:
 * Map 'nsgld' data descriptors from 'segment'. The function will subtract the
 * number of bytes mapped in len.
 */
-static uint16_t nvme_map_sgl_data(NvmeCtrl *n, NvmeSg *sg,
+static uint16_t nvme_map_sgl_data(NvmeState *n, NvmeSg *sg,
                                  NvmeSglDescriptor *segment, uint64_t nsgld,
                                  size_t *len, NvmeCmd *cmd)
 {
@@ -834,7 +834,7 @@ next:
    return NVME_SUCCESS;
 }
 
-static uint16_t nvme_map_sgl(NvmeCtrl *n, NvmeSg *sg, NvmeSglDescriptor sgl,
+static uint16_t nvme_map_sgl(NvmeState *n, NvmeSg *sg, NvmeSglDescriptor sgl,
                             size_t len, NvmeCmd *cmd)
 {
    /*
@@ -977,7 +977,7 @@ unmap:
    return status;
 }
 
-uint16_t nvme_map_dptr(NvmeCtrl *n, NvmeSg *sg, size_t len,
+uint16_t nvme_map_dptr(NvmeState *n, NvmeSg *sg, size_t len,
                       NvmeCmd *cmd)
 {
    uint64_t prp1, prp2;
@@ -996,7 +996,7 @@ uint16_t nvme_map_dptr(NvmeCtrl *n, NvmeSg *sg, size_t len,
    }
 }
 
-static uint16_t nvme_map_mptr(NvmeCtrl *n, NvmeSg *sg, size_t len,
+static uint16_t nvme_map_mptr(NvmeState *n, NvmeSg *sg, size_t len,
                              NvmeCmd *cmd)
 {
    int psdt = NVME_CMD_FLAGS_PSDT(cmd->flags);
@@ -1027,7 +1027,7 @@ static uint16_t nvme_map_mptr(NvmeCtrl *n, NvmeSg *sg, size_t len,
    return status;
 }
 
-static uint16_t nvme_map_data(NvmeCtrl *n, uint32_t nlb, NvmeRequest *req)
+static uint16_t nvme_map_data(NvmeState *n, uint32_t nlb, NvmeRequest *req)
 {
    NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(req->ns);
    NvmeRwCmd *rw = (NvmeRwCmd *)&req->cmd;
@@ -1056,7 +1056,7 @@ static uint16_t nvme_map_data(NvmeCtrl *n, uint32_t nlb, NvmeRequest *req)
    return nvme_map_dptr(n, &req->sg, len, &req->cmd);
 }
 
-static uint16_t nvme_map_mdata(NvmeCtrl *n, uint32_t nlb, NvmeRequest *req)
+static uint16_t nvme_map_mdata(NvmeState *n, uint32_t nlb, NvmeRequest *req)
 {
    NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(req->ns);
    size_t len = nvme_m2b(nvm, nlb);
@@ -1082,7 +1082,7 @@ static uint16_t nvme_map_mdata(NvmeCtrl *n, uint32_t nlb, NvmeRequest *req)
    return nvme_map_mptr(n, &req->sg, len, &req->cmd);
 }
 
-static uint16_t nvme_tx_interleaved(NvmeCtrl *n, NvmeSg *sg, uint8_t *ptr,
+static uint16_t nvme_tx_interleaved(NvmeState *n, NvmeSg *sg, uint8_t *ptr,
                                    uint32_t len, uint32_t bytes,
int32_t skip_bytes, int64_t offset, NvmeTxDirection dir) @@ -1144,7 +1144,7 @@ static uint16_t nvme_tx_interleaved(NvmeCtrl *n, NvmeSg *sg, uint8_t *ptr, return NVME_SUCCESS; } -static uint16_t nvme_tx(NvmeCtrl *n, NvmeSg *sg, uint8_t *ptr, uint32_t len, +static uint16_t nvme_tx(NvmeState *n, NvmeSg *sg, uint8_t *ptr, uint32_t len, NvmeTxDirection dir) { assert(sg->flags & NVME_SG_ALLOC); @@ -1180,7 +1180,7 @@ static uint16_t nvme_tx(NvmeCtrl *n, NvmeSg *sg, uint8_t *ptr, uint32_t len, return NVME_SUCCESS; } -static inline uint16_t nvme_c2h(NvmeCtrl *n, uint8_t *ptr, uint32_t len, +static inline uint16_t nvme_c2h(NvmeState *n, uint8_t *ptr, uint32_t len, NvmeRequest *req) { uint16_t status; @@ -1193,7 +1193,7 @@ static inline uint16_t nvme_c2h(NvmeCtrl *n, uint8_t *ptr, uint32_t len, return nvme_tx(n, &req->sg, ptr, len, NVME_TX_DIRECTION_FROM_DEVICE); } -static inline uint16_t nvme_h2c(NvmeCtrl *n, uint8_t *ptr, uint32_t len, +static inline uint16_t nvme_h2c(NvmeState *n, uint8_t *ptr, uint32_t len, NvmeRequest *req) { uint16_t status; @@ -1206,7 +1206,7 @@ static inline uint16_t nvme_h2c(NvmeCtrl *n, uint8_t *ptr, uint32_t len, return nvme_tx(n, &req->sg, ptr, len, NVME_TX_DIRECTION_TO_DEVICE); } -uint16_t nvme_bounce_data(NvmeCtrl *n, uint8_t *ptr, uint32_t len, +uint16_t nvme_bounce_data(NvmeState *n, uint8_t *ptr, uint32_t len, NvmeTxDirection dir, NvmeRequest *req) { NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(req->ns); @@ -1222,7 +1222,7 @@ uint16_t nvme_bounce_data(NvmeCtrl *n, uint8_t *ptr, uint32_t len, return nvme_tx(n, &req->sg, ptr, len, dir); } -uint16_t nvme_bounce_mdata(NvmeCtrl *n, uint8_t *ptr, uint32_t len, +uint16_t nvme_bounce_mdata(NvmeState *n, uint8_t *ptr, uint32_t len, NvmeTxDirection dir, NvmeRequest *req) { NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(req->ns); @@ -1272,7 +1272,7 @@ static inline void nvme_blk_write(BlockBackend *blk, int64_t offset, static void nvme_post_cqes(void *opaque) { NvmeCQueue *cq = opaque; - NvmeCtrl *n = 
cq->ctrl; + NvmeState *n = cq->ctrl; NvmeRequest *req, *next; bool pending = cq->head != cq->tail; int ret; @@ -1332,7 +1332,7 @@ static void nvme_enqueue_req_completion(NvmeCQueue *cq, NvmeRequest *req) static void nvme_process_aers(void *opaque) { - NvmeCtrl *n = opaque; + NvmeState *n = opaque; NvmeAsyncEvent *event, *next; trace_pci_nvme_process_aers(n->aer_queued); @@ -1374,7 +1374,7 @@ static void nvme_process_aers(void *opaque) } } -static void nvme_enqueue_event(NvmeCtrl *n, uint8_t event_type, +static void nvme_enqueue_event(NvmeState *n, uint8_t event_type, uint8_t event_info, uint8_t log_page) { NvmeAsyncEvent *event; @@ -1399,7 +1399,7 @@ static void nvme_enqueue_event(NvmeCtrl *n, uint8_t event_type, nvme_process_aers(n); } -static void nvme_smart_event(NvmeCtrl *n, uint8_t event) +static void nvme_smart_event(NvmeState *n, uint8_t event) { uint8_t aer_info; @@ -1428,7 +1428,7 @@ static void nvme_smart_event(NvmeCtrl *n, uint8_t event) nvme_enqueue_event(n, NVME_AER_TYPE_SMART, aer_info, NVME_LOG_SMART_INFO); } -static void nvme_clear_events(NvmeCtrl *n, uint8_t event_type) +static void nvme_clear_events(NvmeState *n, uint8_t event_type) { n->aer_mask &= ~(1 << event_type); if (!QTAILQ_EMPTY(&n->aer_queue)) { @@ -1436,7 +1436,7 @@ static void nvme_clear_events(NvmeCtrl *n, uint8_t event_type) } } -static inline uint16_t nvme_check_mdts(NvmeCtrl *n, size_t len) +static inline uint16_t nvme_check_mdts(NvmeState *n, size_t len) { uint8_t mdts = n->params.mdts; @@ -1745,7 +1745,7 @@ enum { NVME_ZRM_AUTO = 1 << 0, }; -static uint16_t nvme_zrm_open_flags(NvmeCtrl *n, NvmeNamespaceZoned *zoned, +static uint16_t nvme_zrm_open_flags(NvmeState *n, NvmeNamespaceZoned *zoned, NvmeZone *zone, int flags) { int act = 0; @@ -1796,13 +1796,13 @@ static uint16_t nvme_zrm_open_flags(NvmeCtrl *n, NvmeNamespaceZoned *zoned, } } -static inline uint16_t nvme_zrm_auto(NvmeCtrl *n, NvmeNamespaceZoned *zoned, +static inline uint16_t nvme_zrm_auto(NvmeState *n, 
NvmeNamespaceZoned *zoned, NvmeZone *zone) { return nvme_zrm_open_flags(n, zoned, zone, NVME_ZRM_AUTO); } -static inline uint16_t nvme_zrm_open(NvmeCtrl *n, NvmeNamespaceZoned *zoned, +static inline uint16_t nvme_zrm_open(NvmeState *n, NvmeNamespaceZoned *zoned, NvmeZone *zone) { return nvme_zrm_open_flags(n, zoned, zone, 0); @@ -1918,7 +1918,7 @@ static void nvme_rw_cb(void *opaque, int ret) uint16_t status; nvme_sg_unmap(&req->sg); - status = nvme_map_mdata(nvme_ctrl(req), nlb, req); + status = nvme_map_mdata(nvme_state(req), nlb, req); if (status) { ret = -EFAULT; goto out; @@ -2038,7 +2038,7 @@ static void nvme_compare_mdata_cb(void *opaque, int ret) NvmeRequest *req = opaque; NvmeNamespace *ns = req->ns; NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns); - NvmeCtrl *n = nvme_ctrl(req); + NvmeState *n = nvme_state(req); NvmeRwCmd *rw = (NvmeRwCmd *)&req->cmd; uint8_t prinfo = NVME_RW_PRINFO(le16_to_cpu(rw->control)); uint16_t apptag = le16_to_cpu(rw->apptag); @@ -2123,7 +2123,7 @@ out: static void nvme_compare_data_cb(void *opaque, int ret) { NvmeRequest *req = opaque; - NvmeCtrl *n = nvme_ctrl(req); + NvmeState *n = nvme_state(req); NvmeNamespace *ns = req->ns; NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns); BlockBackend *blk = nvme_blk(ns); @@ -2286,7 +2286,7 @@ static void nvme_dsm_cb(void *opaque, int ret) { NvmeDSMAIOCB *iocb = opaque; NvmeRequest *req = iocb->req; - NvmeCtrl *n = nvme_ctrl(req); + NvmeState *n = nvme_state(req); NvmeNamespace *ns = req->ns; NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns); NvmeDsmRange *range; @@ -2330,7 +2330,7 @@ done: qemu_bh_schedule(iocb->bh); } -static uint16_t nvme_dsm(NvmeCtrl *n, NvmeRequest *req) +static uint16_t nvme_dsm(NvmeState *n, NvmeRequest *req) { NvmeNamespace *ns = req->ns; NvmeDsmCmd *dsm = (NvmeDsmCmd *) &req->cmd; @@ -2366,7 +2366,7 @@ static uint16_t nvme_dsm(NvmeCtrl *n, NvmeRequest *req) return status; } -static uint16_t nvme_verify(NvmeCtrl *n, NvmeRequest *req) +static uint16_t nvme_verify(NvmeState 
*n, NvmeRequest *req) { NvmeRwCmd *rw = (NvmeRwCmd *)&req->cmd; NvmeNamespace *ns = req->ns; @@ -2774,8 +2774,7 @@ done: } } - -static uint16_t nvme_copy(NvmeCtrl *n, NvmeRequest *req) +static uint16_t nvme_copy(NvmeState *n, NvmeRequest *req) { NvmeNamespace *ns = req->ns; NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns); @@ -2863,7 +2862,7 @@ invalid: return status; } -static uint16_t nvme_compare(NvmeCtrl *n, NvmeRequest *req) +static uint16_t nvme_compare(NvmeState *n, NvmeRequest *req) { NvmeRwCmd *rw = (NvmeRwCmd *)&req->cmd; NvmeNamespace *ns = req->ns; @@ -2984,7 +2983,7 @@ static void nvme_flush_bh(void *opaque) { NvmeFlushAIOCB *iocb = opaque; NvmeRequest *req = iocb->req; - NvmeCtrl *n = nvme_ctrl(req); + NvmeState *n = nvme_state(req); int i; if (iocb->ret < 0) { @@ -3019,7 +3018,7 @@ done: return; } -static uint16_t nvme_flush(NvmeCtrl *n, NvmeRequest *req) +static uint16_t nvme_flush(NvmeState *n, NvmeRequest *req) { NvmeFlushAIOCB *iocb; uint32_t nsid = le32_to_cpu(req->cmd.nsid); @@ -3062,7 +3061,7 @@ out: return status; } -static uint16_t nvme_read(NvmeCtrl *n, NvmeRequest *req) +static uint16_t nvme_read(NvmeState *n, NvmeRequest *req) { NvmeRwCmd *rw = (NvmeRwCmd *)&req->cmd; NvmeNamespace *ns = req->ns; @@ -3136,7 +3135,7 @@ invalid: return status | NVME_DNR; } -static uint16_t nvme_do_write(NvmeCtrl *n, NvmeRequest *req, bool append, +static uint16_t nvme_do_write(NvmeState *n, NvmeRequest *req, bool append, bool wrz) { NvmeRwCmd *rw = (NvmeRwCmd *)&req->cmd; @@ -3272,17 +3271,17 @@ invalid: return status | NVME_DNR; } -static inline uint16_t nvme_write(NvmeCtrl *n, NvmeRequest *req) +static inline uint16_t nvme_write(NvmeState *n, NvmeRequest *req) { return nvme_do_write(n, req, false, false); } -static inline uint16_t nvme_write_zeroes(NvmeCtrl *n, NvmeRequest *req) +static inline uint16_t nvme_write_zeroes(NvmeState *n, NvmeRequest *req) { return nvme_do_write(n, req, false, true); } -static inline uint16_t nvme_zone_append(NvmeCtrl *n, 
NvmeRequest *req) +static inline uint16_t nvme_zone_append(NvmeState *n, NvmeRequest *req) { return nvme_do_write(n, req, true, false); } @@ -3330,7 +3329,7 @@ enum NvmeZoneProcessingMask { static uint16_t nvme_open_zone(NvmeNamespaceZoned *zoned, NvmeZone *zone, NvmeZoneState state, NvmeRequest *req) { - return nvme_zrm_open(nvme_ctrl(req), zoned, zone); + return nvme_zrm_open(nvme_state(req), zoned, zone); } static uint16_t nvme_close_zone(NvmeNamespaceZoned *zoned, NvmeZone *zone, @@ -3609,7 +3608,7 @@ done: } } -static uint16_t nvme_zone_mgmt_send(NvmeCtrl *n, NvmeRequest *req) +static uint16_t nvme_zone_mgmt_send(NvmeState *n, NvmeRequest *req) { NvmeCmd *cmd = (NvmeCmd *)&req->cmd; NvmeNamespace *ns = req->ns; @@ -3758,7 +3757,7 @@ static bool nvme_zone_matches_filter(uint32_t zafs, NvmeZone *zl) } } -static uint16_t nvme_zone_mgmt_recv(NvmeCtrl *n, NvmeRequest *req) +static uint16_t nvme_zone_mgmt_recv(NvmeState *n, NvmeRequest *req) { NvmeCmd *cmd = (NvmeCmd *)&req->cmd; NvmeNamespace *ns = req->ns; @@ -3866,7 +3865,7 @@ static uint16_t nvme_zone_mgmt_recv(NvmeCtrl *n, NvmeRequest *req) return status; } -static uint16_t nvme_io_cmd(NvmeCtrl *n, NvmeRequest *req) +static uint16_t nvme_io_cmd(NvmeState *n, NvmeRequest *req) { NvmeNamespace *ns; uint32_t nsid = le32_to_cpu(req->cmd.nsid); @@ -3945,7 +3944,7 @@ static uint16_t nvme_io_cmd(NvmeCtrl *n, NvmeRequest *req) return NVME_INVALID_OPCODE | NVME_DNR; } -static void nvme_free_sq(NvmeSQueue *sq, NvmeCtrl *n) +static void nvme_free_sq(NvmeSQueue *sq, NvmeState *n) { n->sq[sq->sqid] = NULL; timer_free(sq->timer); @@ -3955,7 +3954,7 @@ static void nvme_free_sq(NvmeSQueue *sq, NvmeCtrl *n) } } -static uint16_t nvme_del_sq(NvmeCtrl *n, NvmeRequest *req) +static uint16_t nvme_del_sq(NvmeState *n, NvmeRequest *req) { NvmeDeleteQ *c = (NvmeDeleteQ *)&req->cmd; NvmeRequest *r, *next; @@ -3996,7 +3995,7 @@ static uint16_t nvme_del_sq(NvmeCtrl *n, NvmeRequest *req) return NVME_SUCCESS; } -static void 
nvme_init_sq(NvmeSQueue *sq, NvmeCtrl *n, uint64_t dma_addr, +static void nvme_init_sq(NvmeSQueue *sq, NvmeState *n, uint64_t dma_addr, uint16_t sqid, uint16_t cqid, uint16_t size) { int i; @@ -4024,7 +4023,7 @@ static void nvme_init_sq(NvmeSQueue *sq, NvmeCtrl *n, uint64_t dma_addr, n->sq[sqid] = sq; } -static uint16_t nvme_create_sq(NvmeCtrl *n, NvmeRequest *req) +static uint16_t nvme_create_sq(NvmeState *n, NvmeRequest *req) { NvmeSQueue *sq; NvmeCreateSq *c = (NvmeCreateSq *)&req->cmd; @@ -4080,7 +4079,7 @@ static void nvme_set_blk_stats(NvmeNamespace *ns, struct nvme_stats *stats) stats->write_commands += s->nr_ops[BLOCK_ACCT_WRITE]; } -static uint16_t nvme_smart_info(NvmeCtrl *n, uint8_t rae, uint32_t buf_len, +static uint16_t nvme_smart_info(NvmeState *n, uint8_t rae, uint32_t buf_len, uint64_t off, NvmeRequest *req) { uint32_t nsid = le32_to_cpu(req->cmd.nsid); @@ -4140,7 +4139,7 @@ static uint16_t nvme_smart_info(NvmeCtrl *n, uint8_t rae, uint32_t buf_len, return nvme_c2h(n, (uint8_t *) &smart + off, trans_len, req); } -static uint16_t nvme_fw_log_info(NvmeCtrl *n, uint32_t buf_len, uint64_t off, +static uint16_t nvme_fw_log_info(NvmeState *n, uint32_t buf_len, uint64_t off, NvmeRequest *req) { uint32_t trans_len; @@ -4158,7 +4157,7 @@ static uint16_t nvme_fw_log_info(NvmeCtrl *n, uint32_t buf_len, uint64_t off, return nvme_c2h(n, (uint8_t *) &fw_log + off, trans_len, req); } -static uint16_t nvme_error_info(NvmeCtrl *n, uint8_t rae, uint32_t buf_len, +static uint16_t nvme_error_info(NvmeState *n, uint8_t rae, uint32_t buf_len, uint64_t off, NvmeRequest *req) { uint32_t trans_len; @@ -4178,7 +4177,7 @@ static uint16_t nvme_error_info(NvmeCtrl *n, uint8_t rae, uint32_t buf_len, return nvme_c2h(n, (uint8_t *)&errlog, trans_len, req); } -static uint16_t nvme_changed_nslist(NvmeCtrl *n, uint8_t rae, uint32_t buf_len, +static uint16_t nvme_changed_nslist(NvmeState *n, uint8_t rae, uint32_t buf_len, uint64_t off, NvmeRequest *req) { uint32_t nslist[1024]; @@ 
-4220,7 +4219,7 @@ static uint16_t nvme_changed_nslist(NvmeCtrl *n, uint8_t rae, uint32_t buf_len, return nvme_c2h(n, ((uint8_t *)nslist) + off, trans_len, req); } -static uint16_t nvme_cmd_effects(NvmeCtrl *n, uint8_t csi, uint32_t buf_len, +static uint16_t nvme_cmd_effects(NvmeState *n, uint8_t csi, uint32_t buf_len, uint64_t off, NvmeRequest *req) { NvmeEffectsLog log = {}; @@ -4260,7 +4259,7 @@ static uint16_t nvme_cmd_effects(NvmeCtrl *n, uint8_t csi, uint32_t buf_len, return nvme_c2h(n, ((uint8_t *)&log) + off, trans_len, req); } -static uint16_t nvme_get_log(NvmeCtrl *n, NvmeRequest *req) +static uint16_t nvme_get_log(NvmeState *n, NvmeRequest *req) { NvmeCmd *cmd = &req->cmd; @@ -4313,7 +4312,7 @@ static uint16_t nvme_get_log(NvmeCtrl *n, NvmeRequest *req) } } -static void nvme_free_cq(NvmeCQueue *cq, NvmeCtrl *n) +static void nvme_free_cq(NvmeCQueue *cq, NvmeState *n) { n->cq[cq->cqid] = NULL; timer_free(cq->timer); @@ -4325,7 +4324,7 @@ static void nvme_free_cq(NvmeCQueue *cq, NvmeCtrl *n) } } -static uint16_t nvme_del_cq(NvmeCtrl *n, NvmeRequest *req) +static uint16_t nvme_del_cq(NvmeState *n, NvmeRequest *req) { NvmeDeleteQ *c = (NvmeDeleteQ *)&req->cmd; NvmeCQueue *cq; @@ -4352,7 +4351,7 @@ static uint16_t nvme_del_cq(NvmeCtrl *n, NvmeRequest *req) return NVME_SUCCESS; } -static void nvme_init_cq(NvmeCQueue *cq, NvmeCtrl *n, uint64_t dma_addr, +static void nvme_init_cq(NvmeCQueue *cq, NvmeState *n, uint64_t dma_addr, uint16_t cqid, uint16_t vector, uint16_t size, uint16_t irq_enabled) { @@ -4376,7 +4375,7 @@ static void nvme_init_cq(NvmeCQueue *cq, NvmeCtrl *n, uint64_t dma_addr, cq->timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, nvme_post_cqes, cq); } -static uint16_t nvme_create_cq(NvmeCtrl *n, NvmeRequest *req) +static uint16_t nvme_create_cq(NvmeState *n, NvmeRequest *req) { NvmeCQueue *cq; NvmeCreateCq *c = (NvmeCreateCq *)&req->cmd; @@ -4428,21 +4427,21 @@ static uint16_t nvme_create_cq(NvmeCtrl *n, NvmeRequest *req) return NVME_SUCCESS; } -static 
uint16_t nvme_rpt_empty_id_struct(NvmeCtrl *n, NvmeRequest *req) +static uint16_t nvme_rpt_empty_id_struct(NvmeState *n, NvmeRequest *req) { uint8_t id[NVME_IDENTIFY_DATA_SIZE] = {}; return nvme_c2h(n, id, sizeof(id), req); } -static uint16_t nvme_identify_ctrl(NvmeCtrl *n, NvmeRequest *req) +static uint16_t nvme_identify_ctrl(NvmeState *n, NvmeRequest *req) { trace_pci_nvme_identify_ctrl(); return nvme_c2h(n, (uint8_t *)&n->id_ctrl, sizeof(n->id_ctrl), req); } -static uint16_t nvme_identify_ctrl_csi(NvmeCtrl *n, NvmeRequest *req) +static uint16_t nvme_identify_ctrl_csi(NvmeState *n, NvmeRequest *req) { NvmeIdentify *c = (NvmeIdentify *)&req->cmd; uint8_t id[NVME_IDENTIFY_DATA_SIZE] = {}; @@ -4467,7 +4466,7 @@ static uint16_t nvme_identify_ctrl_csi(NvmeCtrl *n, NvmeRequest *req) return nvme_c2h(n, id, sizeof(id), req); } -static uint16_t nvme_identify_ns(NvmeCtrl *n, NvmeRequest *req, bool active) +static uint16_t nvme_identify_ns(NvmeState *n, NvmeRequest *req, bool active) { NvmeNamespace *ns; NvmeIdentify *c = (NvmeIdentify *)&req->cmd; @@ -4499,7 +4498,7 @@ static uint16_t nvme_identify_ns(NvmeCtrl *n, NvmeRequest *req, bool active) return NVME_INVALID_CMD_SET | NVME_DNR; } -static uint16_t nvme_identify_ctrl_list(NvmeCtrl *n, NvmeRequest *req, +static uint16_t nvme_identify_ctrl_list(NvmeState *n, NvmeRequest *req, bool attached) { NvmeIdentify *c = (NvmeIdentify *)&req->cmd; @@ -4508,7 +4507,7 @@ static uint16_t nvme_identify_ctrl_list(NvmeCtrl *n, NvmeRequest *req, uint16_t list[NVME_CONTROLLER_LIST_SIZE] = {}; uint16_t *ids = &list[1]; NvmeNamespace *ns; - NvmeCtrl *ctrl; + NvmeState *ctrl; int cntlid, nr_ids = 0; trace_pci_nvme_identify_ctrl_list(c->cns, min_id); @@ -4546,7 +4545,7 @@ static uint16_t nvme_identify_ctrl_list(NvmeCtrl *n, NvmeRequest *req, return nvme_c2h(n, (uint8_t *)list, sizeof(list), req); } -static uint16_t nvme_identify_ns_csi(NvmeCtrl *n, NvmeRequest *req, +static uint16_t nvme_identify_ns_csi(NvmeState *n, NvmeRequest *req, bool 
active) { NvmeNamespace *ns; @@ -4581,7 +4580,7 @@ static uint16_t nvme_identify_ns_csi(NvmeCtrl *n, NvmeRequest *req, return NVME_INVALID_FIELD | NVME_DNR; } -static uint16_t nvme_identify_nslist(NvmeCtrl *n, NvmeRequest *req, +static uint16_t nvme_identify_nslist(NvmeState *n, NvmeRequest *req, bool active) { NvmeNamespace *ns; @@ -4628,7 +4627,7 @@ static uint16_t nvme_identify_nslist(NvmeCtrl *n, NvmeRequest *req, return nvme_c2h(n, list, data_len, req); } -static uint16_t nvme_identify_nslist_csi(NvmeCtrl *n, NvmeRequest *req, +static uint16_t nvme_identify_nslist_csi(NvmeState *n, NvmeRequest *req, bool active) { NvmeNamespace *ns; @@ -4676,7 +4675,7 @@ static uint16_t nvme_identify_nslist_csi(NvmeCtrl *n, NvmeRequest *req, return nvme_c2h(n, list, data_len, req); } -static uint16_t nvme_identify_ns_descr_list(NvmeCtrl *n, NvmeRequest *req) +static uint16_t nvme_identify_ns_descr_list(NvmeState *n, NvmeRequest *req) { NvmeNamespace *ns; NvmeIdentify *c = (NvmeIdentify *)&req->cmd; @@ -4735,7 +4734,7 @@ static uint16_t nvme_identify_ns_descr_list(NvmeCtrl *n, NvmeRequest *req) return nvme_c2h(n, list, sizeof(list), req); } -static uint16_t nvme_identify_cmd_set(NvmeCtrl *n, NvmeRequest *req) +static uint16_t nvme_identify_cmd_set(NvmeState *n, NvmeRequest *req) { uint8_t list[NVME_IDENTIFY_DATA_SIZE] = {}; static const int data_len = sizeof(list); @@ -4748,7 +4747,7 @@ static uint16_t nvme_identify_cmd_set(NvmeCtrl *n, NvmeRequest *req) return nvme_c2h(n, list, data_len, req); } -static uint16_t nvme_identify(NvmeCtrl *n, NvmeRequest *req) +static uint16_t nvme_identify(NvmeState *n, NvmeRequest *req) { NvmeIdentify *c = (NvmeIdentify *)&req->cmd; @@ -4790,7 +4789,7 @@ static uint16_t nvme_identify(NvmeCtrl *n, NvmeRequest *req) } } -static uint16_t nvme_abort(NvmeCtrl *n, NvmeRequest *req) +static uint16_t nvme_abort(NvmeState *n, NvmeRequest *req) { uint16_t sqid = le32_to_cpu(req->cmd.cdw10) & 0xffff; @@ -4802,7 +4801,7 @@ static uint16_t 
nvme_abort(NvmeCtrl *n, NvmeRequest *req) return NVME_SUCCESS; } -static inline void nvme_set_timestamp(NvmeCtrl *n, uint64_t ts) +static inline void nvme_set_timestamp(NvmeState *n, uint64_t ts) { trace_pci_nvme_setfeat_timestamp(ts); @@ -4810,7 +4809,7 @@ static inline void nvme_set_timestamp(NvmeCtrl *n, uint64_t ts) n->timestamp_set_qemu_clock_ms = qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL); } -static inline uint64_t nvme_get_timestamp(const NvmeCtrl *n) +static inline uint64_t nvme_get_timestamp(const NvmeState *n) { uint64_t current_time = qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL); uint64_t elapsed_time = current_time - n->timestamp_set_qemu_clock_ms; @@ -4837,14 +4836,14 @@ static inline uint64_t nvme_get_timestamp(const NvmeCtrl *n) return cpu_to_le64(ts.all); } -static uint16_t nvme_get_feature_timestamp(NvmeCtrl *n, NvmeRequest *req) +static uint16_t nvme_get_feature_timestamp(NvmeState *n, NvmeRequest *req) { uint64_t timestamp = nvme_get_timestamp(n); return nvme_c2h(n, (uint8_t *)×tamp, sizeof(timestamp), req); } -static uint16_t nvme_get_feature(NvmeCtrl *n, NvmeRequest *req) +static uint16_t nvme_get_feature(NvmeState *n, NvmeRequest *req) { NvmeCmd *cmd = &req->cmd; uint32_t dw10 = le32_to_cpu(cmd->cdw10); @@ -4994,7 +4993,7 @@ out: return NVME_SUCCESS; } -static uint16_t nvme_set_feature_timestamp(NvmeCtrl *n, NvmeRequest *req) +static uint16_t nvme_set_feature_timestamp(NvmeState *n, NvmeRequest *req) { uint16_t ret; uint64_t timestamp; @@ -5009,7 +5008,7 @@ static uint16_t nvme_set_feature_timestamp(NvmeCtrl *n, NvmeRequest *req) return NVME_SUCCESS; } -static uint16_t nvme_set_feature(NvmeCtrl *n, NvmeRequest *req) +static uint16_t nvme_set_feature(NvmeState *n, NvmeRequest *req) { NvmeNamespace *ns = NULL; NvmeNamespaceNvm *nvm; @@ -5156,7 +5155,7 @@ static uint16_t nvme_set_feature(NvmeCtrl *n, NvmeRequest *req) return NVME_SUCCESS; } -static uint16_t nvme_aer(NvmeCtrl *n, NvmeRequest *req) +static uint16_t nvme_aer(NvmeState *n, NvmeRequest *req) { 
trace_pci_nvme_aer(nvme_cid(req)); @@ -5175,7 +5174,7 @@ static uint16_t nvme_aer(NvmeCtrl *n, NvmeRequest *req) return NVME_NO_COMPLETE; } -static void nvme_update_dmrsl(NvmeCtrl *n) +static void nvme_update_dmrsl(NvmeState *n) { int nsid; @@ -5193,7 +5192,7 @@ static void nvme_update_dmrsl(NvmeCtrl *n) } } -static void nvme_select_iocs_ns(NvmeCtrl *n, NvmeNamespace *ns) +static void nvme_select_iocs_ns(NvmeState *n, NvmeNamespace *ns) { uint32_t cc = ldl_le_p(&n->bar.cc); @@ -5214,10 +5213,10 @@ static void nvme_select_iocs_ns(NvmeCtrl *n, NvmeNamespace *ns) } } -static uint16_t nvme_ns_attachment(NvmeCtrl *n, NvmeRequest *req) +static uint16_t nvme_ns_attachment(NvmeState *n, NvmeRequest *req) { NvmeNamespace *ns; - NvmeCtrl *ctrl; + NvmeState *ctrl; uint16_t list[NVME_CONTROLLER_LIST_SIZE] = {}; uint32_t nsid = le32_to_cpu(req->cmd.nsid); uint32_t dw10 = le32_to_cpu(req->cmd.cdw10); @@ -5409,7 +5408,7 @@ static void nvme_format_bh(void *opaque) { NvmeFormatAIOCB *iocb = opaque; NvmeRequest *req = iocb->req; - NvmeCtrl *n = nvme_ctrl(req); + NvmeState *n = nvme_state(req); uint32_t dw10 = le32_to_cpu(req->cmd.cdw10); uint8_t lbaf = dw10 & 0xf; uint8_t pi = (dw10 >> 5) & 0x7; @@ -5453,7 +5452,7 @@ done: qemu_aio_unref(iocb); } -static uint16_t nvme_format(NvmeCtrl *n, NvmeRequest *req) +static uint16_t nvme_format(NvmeState *n, NvmeRequest *req) { NvmeFormatAIOCB *iocb; uint32_t nsid = le32_to_cpu(req->cmd.nsid); @@ -5494,7 +5493,7 @@ out: return status; } -static uint16_t nvme_admin_cmd(NvmeCtrl *n, NvmeRequest *req) +static uint16_t nvme_admin_cmd(NvmeState *n, NvmeRequest *req) { trace_pci_nvme_admin_cmd(nvme_cid(req), nvme_sqid(req), req->cmd.opcode, nvme_adm_opc_str(req->cmd.opcode)); @@ -5544,7 +5543,7 @@ static uint16_t nvme_admin_cmd(NvmeCtrl *n, NvmeRequest *req) static void nvme_process_sq(void *opaque) { NvmeSQueue *sq = opaque; - NvmeCtrl *n = sq->ctrl; + NvmeState *n = sq->ctrl; NvmeCQueue *cq = n->cq[sq->cqid]; uint16_t status; @@ -5578,7 +5577,7 @@ 
static void nvme_process_sq(void *opaque) } } -static void nvme_ctrl_reset(NvmeCtrl *n) +static void nvme_ctrl_reset(NvmeState *n) { NvmeNamespace *ns; int i; @@ -5614,7 +5613,7 @@ static void nvme_ctrl_reset(NvmeCtrl *n) n->qs_created = false; } -static void nvme_ctrl_shutdown(NvmeCtrl *n) +static void nvme_ctrl_shutdown(NvmeState *n) { NvmeNamespace *ns; int i; @@ -5633,7 +5632,7 @@ static void nvme_ctrl_shutdown(NvmeCtrl *n) } } -static void nvme_select_iocs(NvmeCtrl *n) +static void nvme_select_iocs(NvmeState *n) { NvmeNamespace *ns; int i; @@ -5648,7 +5647,7 @@ static void nvme_select_iocs(NvmeCtrl *n) } } -static int nvme_start_ctrl(NvmeCtrl *n) +static int nvme_start_ctrl(NvmeState *n) { uint64_t cap = ldq_le_p(&n->bar.cap); uint32_t cc = ldl_le_p(&n->bar.cc); @@ -5745,7 +5744,7 @@ static int nvme_start_ctrl(NvmeCtrl *n) return 0; } -static void nvme_cmb_enable_regs(NvmeCtrl *n) +static void nvme_cmb_enable_regs(NvmeState *n) { uint32_t cmbloc = ldl_le_p(&n->bar.cmbloc); uint32_t cmbsz = ldl_le_p(&n->bar.cmbsz); @@ -5765,7 +5764,7 @@ static void nvme_cmb_enable_regs(NvmeCtrl *n) stl_le_p(&n->bar.cmbsz, cmbsz); } -static void nvme_write_bar(NvmeCtrl *n, hwaddr offset, uint64_t data, +static void nvme_write_bar(NvmeState *n, hwaddr offset, uint64_t data, unsigned size) { uint64_t cap = ldq_le_p(&n->bar.cap); @@ -6015,7 +6014,7 @@ static void nvme_write_bar(NvmeCtrl *n, hwaddr offset, uint64_t data, static uint64_t nvme_mmio_read(void *opaque, hwaddr addr, unsigned size) { - NvmeCtrl *n = (NvmeCtrl *)opaque; + NvmeState *n = (NvmeState *)opaque; uint8_t *ptr = (uint8_t *)&n->bar; trace_pci_nvme_mmio_read(addr, size); @@ -6053,7 +6052,7 @@ static uint64_t nvme_mmio_read(void *opaque, hwaddr addr, unsigned size) return ldn_le_p(ptr + addr, size); } -static void nvme_process_db(NvmeCtrl *n, hwaddr addr, int val) +static void nvme_process_db(NvmeState *n, hwaddr addr, int val) { uint32_t qid; @@ -6185,7 +6184,7 @@ static void nvme_process_db(NvmeCtrl *n, hwaddr 
addr, int val) static void nvme_mmio_write(void *opaque, hwaddr addr, uint64_t data, unsigned size) { - NvmeCtrl *n = (NvmeCtrl *)opaque; + NvmeState *n = (NvmeState *)opaque; trace_pci_nvme_mmio_write(addr, data, size); @@ -6209,13 +6208,13 @@ static const MemoryRegionOps nvme_mmio_ops = { static void nvme_cmb_write(void *opaque, hwaddr addr, uint64_t data, unsigned size) { - NvmeCtrl *n = (NvmeCtrl *)opaque; + NvmeState *n = (NvmeState *)opaque; stn_le_p(&n->cmb.buf[addr], size, data); } static uint64_t nvme_cmb_read(void *opaque, hwaddr addr, unsigned size) { - NvmeCtrl *n = (NvmeCtrl *)opaque; + NvmeState *n = (NvmeState *)opaque; return ldn_le_p(&n->cmb.buf[addr], size); } @@ -6229,7 +6228,7 @@ static const MemoryRegionOps nvme_cmb_ops = { }, }; -static void nvme_check_constraints(NvmeCtrl *n, Error **errp) +static int nvme_check_constraints(NvmeState *n, Error **errp) { NvmeParams *params = &n->params; @@ -6240,41 +6239,35 @@ static void nvme_check_constraints(NvmeCtrl *n, Error **errp) params->max_ioqpairs = params->num_queues - 1; } - if (n->namespace.blkconf.blk && n->subsys) { - error_setg(errp, "subsystem support is unavailable with legacy " - "namespace ('drive' property)"); - return; - } - if (params->max_ioqpairs < 1 || params->max_ioqpairs > NVME_MAX_IOQPAIRS) { error_setg(errp, "max_ioqpairs must be between 1 and %d", NVME_MAX_IOQPAIRS); - return; + return -1; } if (params->msix_qsize < 1 || params->msix_qsize > PCI_MSIX_FLAGS_QSIZE + 1) { error_setg(errp, "msix_qsize must be between 1 and %d", PCI_MSIX_FLAGS_QSIZE + 1); - return; + return -1; } if (!params->serial) { error_setg(errp, "serial property not set"); - return; + return -1; } if (n->pmr.dev) { if (host_memory_backend_is_mapped(n->pmr.dev)) { error_setg(errp, "can't use already busy memdev: %s", object_get_canonical_path_component(OBJECT(n->pmr.dev))); - return; + return -1; } if (!is_power_of_2(n->pmr.dev->size)) { error_setg(errp, "pmr backend size needs to be power of 2 in size"); - 
             return;
+            return -1;
         }

         host_memory_backend_set_mapped(n->pmr.dev, true);
@@ -6283,16 +6276,18 @@ static void nvme_check_constraints(NvmeCtrl *n, Error **errp)
     if (n->params.zasl > n->params.mdts) {
         error_setg(errp, "zoned.zasl (Zone Append Size Limit) must be less "
                    "than or equal to mdts (Maximum Data Transfer Size)");
-        return;
+        return -1;
     }

     if (!n->params.vsl) {
         error_setg(errp, "vsl must be non-zero");
-        return;
+        return -1;
     }
+
+    return 0;
 }

-static void nvme_init_state(NvmeCtrl *n)
+static void nvme_init_state(NvmeState *n)
 {
     /* add one to max_ioqpairs to account for the admin queue pair */
     n->reg_size = pow2ceil(sizeof(NvmeBar) +
@@ -6305,7 +6300,7 @@ static void nvme_init_state(NvmeCtrl *n)
     n->aer_reqs = g_new0(NvmeRequest *, n->params.aerl + 1);
 }

-static void nvme_init_cmb(NvmeCtrl *n, PCIDevice *pci_dev)
+static void nvme_init_cmb(NvmeState *n, PCIDevice *pci_dev)
 {
     uint64_t cmb_size = n->params.cmb_size_mb * MiB;
     uint64_t cap = ldq_le_p(&n->bar.cap);
@@ -6327,7 +6322,7 @@ static void nvme_init_cmb(NvmeCtrl *n, PCIDevice *pci_dev)
     }
 }

-static void nvme_init_pmr(NvmeCtrl *n, PCIDevice *pci_dev)
+static void nvme_init_pmr(NvmeState *n, PCIDevice *pci_dev)
 {
     uint32_t pmrcap = ldl_le_p(&n->bar.pmrcap);
@@ -6347,7 +6342,7 @@ static void nvme_init_pmr(NvmeCtrl *n, PCIDevice *pci_dev)
     memory_region_set_enabled(&n->pmr.dev->mr, false);
 }

-static int nvme_init_pci(NvmeCtrl *n, PCIDevice *pci_dev, Error **errp)
+static int nvme_init_pci(NvmeState *n, PCIDevice *pci_dev, Error **errp)
 {
     uint8_t *pci_conf = pci_dev->config;
     uint64_t bar_size, msix_table_size, msix_pba_size;
@@ -6412,7 +6407,7 @@ static int nvme_init_pci(NvmeCtrl *n, PCIDevice *pci_dev, Error **errp)
     return 0;
 }

-static void nvme_init_subnqn(NvmeCtrl *n)
+static void nvme_init_subnqn(NvmeState *n)
 {
     NvmeSubsystem *subsys = n->subsys;
     NvmeIdCtrl *id = &n->id_ctrl;
@@ -6425,7 +6420,7 @@ static void nvme_init_subnqn(NvmeCtrl *n)
     }
 }

-static void nvme_init_ctrl(NvmeCtrl *n, PCIDevice *pci_dev)
+static void nvme_init_ctrl(NvmeState *n, PCIDevice *pci_dev)
 {
     NvmeIdCtrl *id = &n->id_ctrl;
     uint8_t *pci_conf = pci_dev->config;
@@ -6523,7 +6518,7 @@ static void nvme_init_ctrl(NvmeCtrl *n, PCIDevice *pci_dev)
     n->bar.intmc = n->bar.intms = 0;
 }

-static int nvme_init_subsys(NvmeCtrl *n, Error **errp)
+static int nvme_init_subsys(NvmeState *n, Error **errp)
 {
     int cntlid;
@@ -6541,7 +6536,7 @@ static int nvme_init_subsys(NvmeCtrl *n, Error **errp)
     return 0;
 }

-void nvme_attach_ns(NvmeCtrl *n, NvmeNamespace *ns)
+void nvme_attach_ns(NvmeState *n, NvmeNamespace *ns)
 {
     NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns);
     uint32_t nsid = ns->nsid;
@@ -6556,16 +6551,20 @@ void nvme_attach_ns(NvmeCtrl *n, NvmeNamespace *ns)

 static void nvme_realize(PCIDevice *pci_dev, Error **errp)
 {
-    NvmeCtrl *n = NVME(pci_dev);
-    Error *local_err = NULL;
+    NvmeCtrl *ctrl = NVME_DEVICE(pci_dev);
+    NvmeState *n = NVME_STATE(ctrl);

-    nvme_check_constraints(n, &local_err);
-    if (local_err) {
-        error_propagate(errp, local_err);
+    if (ctrl->namespace.blkconf.blk && n->subsys) {
+        error_setg(errp, "subsystem support is unavailable with legacy "
+                   "namespace ('drive' property)");
         return;
     }

-    qbus_create_inplace(&n->bus, sizeof(NvmeBus), TYPE_NVME_BUS,
+    if (nvme_check_constraints(n, errp)) {
+        return;
+    }
+
+    qbus_create_inplace(&ctrl->bus, sizeof(NvmeBus), TYPE_NVME_BUS,
                         &pci_dev->qdev, n->parent_obj.qdev.id);

     nvme_init_state(n);
@@ -6574,14 +6573,13 @@ static void nvme_realize(PCIDevice *pci_dev, Error **errp)
     }

     if (nvme_init_subsys(n, errp)) {
-        error_propagate(errp, local_err);
         return;
     }

     nvme_init_ctrl(n, pci_dev);

     /* setup a namespace if the controller drive property was given */
-    if (n->namespace.blkconf.blk) {
-        NvmeNamespaceDevice *nsdev = &n->namespace;
+    if (ctrl->namespace.blkconf.blk) {
+        NvmeNamespaceDevice *nsdev = &ctrl->namespace;
         NvmeNamespace *ns = &nsdev->ns;

         ns->nsid = 1;
@@ -6593,7 +6591,7 @@ static void nvme_realize(PCIDevice *pci_dev, Error **errp)

 static void nvme_exit(PCIDevice *pci_dev)
 {
-    NvmeCtrl *n = NVME(pci_dev);
+    NvmeState *n = NVME_STATE(pci_dev);
     NvmeNamespace *ns;
     int i;
@@ -6625,33 +6623,37 @@ static void nvme_exit(PCIDevice *pci_dev)
     memory_region_del_subregion(&n->bar0, &n->iomem);
 }

+static Property nvme_state_props[] = {
+    DEFINE_PROP_LINK("pmrdev", NvmeState, pmr.dev, TYPE_MEMORY_BACKEND,
+                     HostMemoryBackend *),
+    DEFINE_PROP_LINK("subsys", NvmeState, subsys, TYPE_NVME_SUBSYS,
+                     NvmeSubsystem *),
+    DEFINE_PROP_STRING("serial", NvmeState, params.serial),
+    DEFINE_PROP_UINT32("cmb_size_mb", NvmeState, params.cmb_size_mb, 0),
+    DEFINE_PROP_UINT32("num_queues", NvmeState, params.num_queues, 0),
+    DEFINE_PROP_UINT32("max_ioqpairs", NvmeState, params.max_ioqpairs, 64),
+    DEFINE_PROP_UINT16("msix_qsize", NvmeState, params.msix_qsize, 65),
+    DEFINE_PROP_UINT8("aerl", NvmeState, params.aerl, 3),
+    DEFINE_PROP_UINT32("aer_max_queued", NvmeState, params.aer_max_queued, 64),
+    DEFINE_PROP_UINT8("mdts", NvmeState, params.mdts, 7),
+    DEFINE_PROP_UINT8("vsl", NvmeState, params.vsl, 7),
+    DEFINE_PROP_BOOL("use-intel-id", NvmeState, params.use_intel_id, false),
+    DEFINE_PROP_BOOL("legacy-cmb", NvmeState, params.legacy_cmb, false),
+    DEFINE_PROP_UINT8("zoned.zasl", NvmeState, params.zasl, 0),
+    DEFINE_PROP_BOOL("zoned.auto_transition", NvmeState,
+                     params.auto_transition_zones, true),
+    DEFINE_PROP_END_OF_LIST(),
+};
+
 static Property nvme_props[] = {
     DEFINE_BLOCK_PROPERTIES(NvmeCtrl, namespace.blkconf),
-    DEFINE_PROP_LINK("pmrdev", NvmeCtrl, pmr.dev, TYPE_MEMORY_BACKEND,
-                     HostMemoryBackend *),
-    DEFINE_PROP_LINK("subsys", NvmeCtrl, subsys, TYPE_NVME_SUBSYS,
-                     NvmeSubsystem *),
-    DEFINE_PROP_STRING("serial", NvmeCtrl, params.serial),
-    DEFINE_PROP_UINT32("cmb_size_mb", NvmeCtrl, params.cmb_size_mb, 0),
-    DEFINE_PROP_UINT32("num_queues", NvmeCtrl, params.num_queues, 0),
-    DEFINE_PROP_UINT32("max_ioqpairs", NvmeCtrl, params.max_ioqpairs, 64),
-    DEFINE_PROP_UINT16("msix_qsize", NvmeCtrl, params.msix_qsize, 65),
-    DEFINE_PROP_UINT8("aerl", NvmeCtrl, params.aerl, 3),
-    DEFINE_PROP_UINT32("aer_max_queued", NvmeCtrl, params.aer_max_queued, 64),
-    DEFINE_PROP_UINT8("mdts", NvmeCtrl, params.mdts, 7),
-    DEFINE_PROP_UINT8("vsl", NvmeCtrl, params.vsl, 7),
-    DEFINE_PROP_BOOL("use-intel-id", NvmeCtrl, params.use_intel_id, false),
-    DEFINE_PROP_BOOL("legacy-cmb", NvmeCtrl, params.legacy_cmb, false),
-    DEFINE_PROP_UINT8("zoned.zasl", NvmeCtrl, params.zasl, 0),
-    DEFINE_PROP_BOOL("zoned.auto_transition", NvmeCtrl,
-                     params.auto_transition_zones, true),
     DEFINE_PROP_END_OF_LIST(),
 };

 static void nvme_get_smart_warning(Object *obj, Visitor *v, const char *name,
                                    void *opaque, Error **errp)
 {
-    NvmeCtrl *n = NVME(obj);
+    NvmeState *n = NVME_STATE(obj);
     uint8_t value = n->smart_critical_warning;

     visit_type_uint8(v, name, &value, errp);
@@ -6660,7 +6662,7 @@ static void nvme_get_smart_warning(Object *obj, Visitor *v, const char *name,
 static void nvme_set_smart_warning(Object *obj, Visitor *v, const char *name,
                                    void *opaque, Error **errp)
 {
-    NvmeCtrl *n = NVME(obj);
+    NvmeState *n = NVME_STATE(obj);
     uint8_t value, old_value, cap = 0, index, event;

     if (!visit_type_uint8(v, name, &value, errp)) {
@@ -6695,7 +6697,7 @@ static const VMStateDescription nvme_vmstate = {
     .unmigratable = 1,
 };

-static void nvme_class_init(ObjectClass *oc, void *data)
+static void nvme_state_class_init(ObjectClass *oc, void *data)
 {
     DeviceClass *dc = DEVICE_CLASS(oc);
     PCIDeviceClass *pc = PCI_DEVICE_CLASS(oc);
@@ -6707,35 +6709,54 @@ static void nvme_class_init(ObjectClass *oc, void *data)
     set_bit(DEVICE_CATEGORY_STORAGE, dc->categories);
     dc->desc = "Non-Volatile Memory Express";
-    device_class_set_props(dc, nvme_props);
+    device_class_set_props(dc, nvme_state_props);
     dc->vmsd = &nvme_vmstate;
 }

-static void nvme_instance_init(Object *obj)
+static void nvme_state_instance_init(Object *obj)
 {
-    NvmeCtrl *n = NVME(obj);
-
-    device_add_bootindex_property(obj, &n->namespace.blkconf.bootindex,
-                                  "bootindex", "/namespace@1,0",
-                                  DEVICE(obj));
-
     object_property_add(obj, "smart_critical_warning", "uint8",
                         nvme_get_smart_warning,
                         nvme_set_smart_warning, NULL, NULL);
 }

-static const TypeInfo nvme_info = {
-    .name = TYPE_NVME,
-    .parent = TYPE_PCI_DEVICE,
-    .instance_size = sizeof(NvmeCtrl),
-    .instance_init = nvme_instance_init,
-    .class_init = nvme_class_init,
+static const TypeInfo nvme_state_info = {
+    .name = TYPE_NVME_STATE,
+    .parent = TYPE_PCI_DEVICE,
+    .abstract = true,
+    .class_init = nvme_state_class_init,
+    .instance_size = sizeof(NvmeState),
+    .instance_init = nvme_state_instance_init,
     .interfaces = (InterfaceInfo[]) {
         { INTERFACE_PCIE_DEVICE },
-        { }
+        { },
     },
 };

+static void nvme_class_init(ObjectClass *oc, void *data)
+{
+    DeviceClass *dc = DEVICE_CLASS(oc);
+
+    device_class_set_props(dc, nvme_props);
+}
+
+static void nvme_instance_init(Object *obj)
+{
+    NvmeCtrl *ctrl = NVME_DEVICE(obj);
+
+    device_add_bootindex_property(obj, &ctrl->namespace.blkconf.bootindex,
+                                  "bootindex", "/namespace@1,0",
+                                  DEVICE(obj));
+}
+
+static const TypeInfo nvme_info = {
+    .name = TYPE_NVME_DEVICE,
+    .parent = TYPE_NVME_STATE,
+    .class_init = nvme_class_init,
+    .instance_size = sizeof(NvmeCtrl),
+    .instance_init = nvme_instance_init,
+    .class_init = nvme_class_init,
+};
+
 static const TypeInfo nvme_bus_info = {
     .name = TYPE_NVME_BUS,
     .parent = TYPE_BUS,
@@ -6744,6 +6765,7 @@ static const TypeInfo nvme_bus_info = {

 static void nvme_register_types(void)
 {
+    type_register_static(&nvme_state_info);
     type_register_static(&nvme_info);
     type_register_static(&nvme_bus_info);
 }
diff --git a/hw/nvme/dif.c b/hw/nvme/dif.c
index 1b8f9ba2fb44..8ad517232c1d 100644
--- a/hw/nvme/dif.c
+++ b/hw/nvme/dif.c
@@ -248,7 +248,7 @@ static void nvme_dif_rw_check_cb(void *opaque, int ret)
     NvmeRequest *req = ctx->req;
     NvmeNamespace *ns = req->ns;
     NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns);
-    NvmeCtrl *n = nvme_ctrl(req);
+    NvmeState *n = nvme_state(req);
     NvmeRwCmd *rw = (NvmeRwCmd *)&req->cmd;
     uint64_t slba = le64_to_cpu(rw->slba);
     uint8_t prinfo = NVME_RW_PRINFO(le16_to_cpu(rw->control));
@@ -357,7 +357,7 @@ out:
     nvme_dif_rw_cb(ctx, ret);
 }

-uint16_t nvme_dif_rw(NvmeCtrl *n, NvmeRequest *req)
+uint16_t nvme_dif_rw(NvmeState *n, NvmeRequest *req)
 {
     NvmeRwCmd *rw = (NvmeRwCmd *)&req->cmd;
     NvmeNamespace *ns = req->ns;
diff --git a/hw/nvme/dif.h b/hw/nvme/dif.h
index 7d47299252ae..53a22bc7c78e 100644
--- a/hw/nvme/dif.h
+++ b/hw/nvme/dif.h
@@ -48,6 +48,6 @@ uint16_t nvme_dif_check(NvmeNamespaceNvm *nvm, uint8_t *buf, size_t len,
                         uint8_t *mbuf, size_t mlen, uint8_t prinfo,
                         uint64_t slba, uint16_t apptag,
                         uint16_t appmask, uint32_t *reftag);
-uint16_t nvme_dif_rw(NvmeCtrl *n, NvmeRequest *req);
+uint16_t nvme_dif_rw(NvmeState *n, NvmeRequest *req);

 #endif /* HW_NVME_DIF_H */
diff --git a/hw/nvme/ns.c b/hw/nvme/ns.c
index b411b184c253..bdd41a3d1fc3 100644
--- a/hw/nvme/ns.c
+++ b/hw/nvme/ns.c
@@ -493,7 +493,7 @@ static void nvme_nsdev_realize(DeviceState *dev, Error **errp)
     NvmeNamespaceDevice *nsdev = NVME_NAMESPACE_DEVICE(dev);
     NvmeNamespace *ns = &nsdev->ns;
     BusState *s = qdev_get_parent_bus(dev);
-    NvmeCtrl *n = NVME(s->parent);
+    NvmeState *n = NVME_STATE(s->parent);
     NvmeSubsystem *subsys = n->subsys;
     uint32_t nsid = nsdev->params.nsid;
     int i;
@@ -558,7 +558,7 @@ static void nvme_nsdev_realize(DeviceState *dev, Error **errp)

     if (nsdev->params.shared) {
         for (i = 0; i < ARRAY_SIZE(subsys->ctrls); i++) {
-            NvmeCtrl *ctrl = subsys->ctrls[i];
+            NvmeState *ctrl = subsys->ctrls[i];

             if (ctrl) {
                 nvme_attach_ns(ctrl, ns);
diff --git a/hw/nvme/nvme.h b/hw/nvme/nvme.h
index 4ae4b4e5ffe1..980a471e195f 100644
--- a/hw/nvme/nvme.h
+++ b/hw/nvme/nvme.h
@@ -30,9 +30,14 @@
 QEMU_BUILD_BUG_ON(NVME_MAX_NAMESPACES > NVME_NSID_BROADCAST - 1);

-typedef struct NvmeCtrl NvmeCtrl;
 typedef struct NvmeNamespace NvmeNamespace;

+#define TYPE_NVME_STATE "nvme-state"
+OBJECT_DECLARE_SIMPLE_TYPE(NvmeState, NVME_STATE)
+
+#define TYPE_NVME_DEVICE "nvme"
+OBJECT_DECLARE_SIMPLE_TYPE(NvmeCtrl, NVME_DEVICE)
+
 #define TYPE_NVME_BUS "nvme-bus"
 OBJECT_DECLARE_SIMPLE_TYPE(NvmeBus, NVME_BUS)

@@ -49,7 +54,7 @@ typedef struct NvmeSubsystem {
     NvmeBus bus;
     uint8_t subnqn[256];

-    NvmeCtrl *ctrls[NVME_MAX_CONTROLLERS];
+    NvmeState *ctrls[NVME_MAX_CONTROLLERS];
     NvmeNamespace *namespaces[NVME_MAX_NAMESPACES + 1];

     struct {
@@ -57,11 +62,11 @@ typedef struct NvmeSubsystem {
     } params;
 } NvmeSubsystem;

-int nvme_subsys_register_ctrl(NvmeCtrl *n, Error **errp);
-void nvme_subsys_unregister_ctrl(NvmeSubsystem *subsys, NvmeCtrl *n);
+int nvme_subsys_register_ctrl(NvmeState *n, Error **errp);
+void nvme_subsys_unregister_ctrl(NvmeSubsystem *subsys, NvmeState *n);

-static inline NvmeCtrl *nvme_subsys_ctrl(NvmeSubsystem *subsys,
-                                         uint32_t cntlid)
+static inline NvmeState *nvme_subsys_ctrl(NvmeSubsystem *subsys,
+                                          uint32_t cntlid)
 {
     if (!subsys || cntlid >= NVME_MAX_CONTROLLERS) {
         return NULL;
@@ -338,7 +343,7 @@ static inline const char *nvme_io_opc_str(uint8_t opc)
 }

 typedef struct NvmeSQueue {
-    struct NvmeCtrl *ctrl;
+    NvmeState *ctrl;
     uint16_t sqid;
     uint16_t cqid;
     uint32_t head;
@@ -353,7 +358,7 @@ typedef struct NvmeSQueue {
 } NvmeSQueue;

 typedef struct NvmeCQueue {
-    struct NvmeCtrl *ctrl;
+    NvmeState *ctrl;
     uint8_t phase;
     uint16_t cqid;
     uint16_t irq_enabled;
@@ -367,10 +372,6 @@ typedef struct NvmeCQueue {
     QTAILQ_HEAD(, NvmeRequest) req_list;
 } NvmeCQueue;

-#define TYPE_NVME "nvme"
-#define NVME(obj) \
-    OBJECT_CHECK(NvmeCtrl, (obj), TYPE_NVME)
-
 typedef struct NvmeParams {
     char *serial;
     uint32_t num_queues; /* deprecated since 5.1 */
@@ -387,13 +388,12 @@ typedef struct NvmeParams {
     bool legacy_cmb;
 } NvmeParams;

-typedef struct NvmeCtrl {
+typedef struct NvmeState {
     PCIDevice parent_obj;
     MemoryRegion bar0;
     MemoryRegion iomem;
     NvmeBar bar;
     NvmeParams params;
-    NvmeBus bus;

     uint16_t cntlid;
     bool qs_created;
@@ -439,7 +439,6 @@ typedef struct NvmeCtrl {

     NvmeSubsystem *subsys;

-    NvmeNamespaceDevice namespace;
     NvmeNamespace *namespaces[NVME_MAX_NAMESPACES + 1];
     NvmeSQueue **sq;
     NvmeCQueue **cq;
@@ -454,9 +453,18 @@ typedef struct NvmeCtrl {
         };
         uint32_t async_config;
     } features;
+} NvmeState;
+
+typedef struct NvmeCtrl {
+    NvmeState parent_obj;
+
+    NvmeBus bus;
+
+    /* for use with legacy single namespace (-device nvme,drive=...) setups */
+    NvmeNamespaceDevice namespace;
 } NvmeCtrl;

-static inline NvmeNamespace *nvme_ns(NvmeCtrl *n, uint32_t nsid)
+static inline NvmeNamespace *nvme_ns(NvmeState *n, uint32_t nsid)
 {
     if (!nsid || nsid > NVME_MAX_NAMESPACES) {
         return NULL;
@@ -468,12 +476,12 @@ static inline NvmeNamespace *nvme_ns(NvmeCtrl *n, uint32_t nsid)

 static inline NvmeCQueue *nvme_cq(NvmeRequest *req)
 {
     NvmeSQueue *sq = req->sq;
-    NvmeCtrl *n = sq->ctrl;
+    NvmeState *n = sq->ctrl;

     return n->cq[sq->cqid];
 }

-static inline NvmeCtrl *nvme_ctrl(NvmeRequest *req)
+static inline NvmeState *nvme_state(NvmeRequest *req)
 {
     NvmeSQueue *sq = req->sq;

     return sq->ctrl;
@@ -488,13 +496,13 @@ static inline uint16_t nvme_cid(NvmeRequest *req)
     return le16_to_cpu(req->cqe.cid);
 }

-void nvme_attach_ns(NvmeCtrl *n, NvmeNamespace *ns);
-uint16_t nvme_bounce_data(NvmeCtrl *n, uint8_t *ptr, uint32_t len,
+void nvme_attach_ns(NvmeState *n, NvmeNamespace *ns);
+uint16_t nvme_bounce_data(NvmeState *n, uint8_t *ptr, uint32_t len,
                           NvmeTxDirection dir, NvmeRequest *req);
-uint16_t nvme_bounce_mdata(NvmeCtrl *n, uint8_t *ptr, uint32_t len,
+uint16_t nvme_bounce_mdata(NvmeState *n, uint8_t *ptr, uint32_t len,
                            NvmeTxDirection dir, NvmeRequest *req);
 void nvme_rw_complete_cb(void *opaque, int ret);
-uint16_t nvme_map_dptr(NvmeCtrl *n, NvmeSg *sg, size_t len,
+uint16_t nvme_map_dptr(NvmeState *n, NvmeSg *sg, size_t len,
                        NvmeCmd *cmd);

 #endif /* HW_NVME_INTERNAL_H */
diff --git a/hw/nvme/subsys.c b/hw/nvme/subsys.c
index 93c35950d69d..2442ae83b744 100644
--- a/hw/nvme/subsys.c
+++ b/hw/nvme/subsys.c
@@ -11,7 +11,7 @@

 #include "nvme.h"

-int nvme_subsys_register_ctrl(NvmeCtrl *n, Error **errp)
+int nvme_subsys_register_ctrl(NvmeState *n, Error **errp)
 {
     NvmeSubsystem *subsys = n->subsys;
     int cntlid;
@@ -32,7 +32,7 @@ int nvme_subsys_register_ctrl(NvmeCtrl *n, Error **errp)
     return cntlid;
 }

-void nvme_subsys_unregister_ctrl(NvmeSubsystem *subsys, NvmeCtrl *n)
+void nvme_subsys_unregister_ctrl(NvmeSubsystem *subsys, NvmeState *n)
 {
     subsys->ctrls[n->cntlid] = NULL;
 }

From patchwork Tue Sep 14 20:37:33 2021
X-Patchwork-Submitter: Klaus Jensen
X-Patchwork-Id: 1528145
From: Klaus Jensen
To: qemu-devel@nongnu.org
Cc: Kevin Wolf, qemu-block@nongnu.org, Klaus Jensen, Markus Armbruster,
    Hanna Reitz, Stefan Hajnoczi, Keith Busch, Philippe Mathieu-Daudé
Subject: [PATCH RFC 09/13] hw/nvme: add experimental device x-nvme-ctrl
Date: Tue, 14 Sep 2021 22:37:33 +0200
Message-Id: <20210914203737.182571-10-its@irrelevant.dk>
In-Reply-To: <20210914203737.182571-1-its@irrelevant.dk>
References: <20210914203737.182571-1-its@irrelevant.dk>

From: Klaus Jensen

Add a new experimental 'x-nvme-ctrl' device which allows us to get rid
of a bunch of legacy options and slightly change others to better use
the qdev property system.
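As a sketch of how such a controller might be configured, the dash-separated property names replace the legacy underscored/dotted ones; the property names below are the ones this RFC defines in nvme_props[], while the drive/serial values and the surrounding command line are hypothetical:

```shell
# Hypothetical invocation of the experimental controller (values are
# placeholders; property spellings are as introduced by this RFC):
qemu-system-x86_64 \
    -device x-nvme-ctrl,serial=deadbeef,max-ioqpairs=8,msix-vectors=16,zoned-zasl=5
```

Compare with the legacy device, which keeps `max_ioqpairs=`, `msix_qsize=` and `zoned.zasl=` for compatibility.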
Signed-off-by: Klaus Jensen
---
 hw/nvme/ctrl.c | 111 +++++++++++++++++++++++++++++++++++++------------
 hw/nvme/nvme.h |  11 ++++-
 2 files changed, 93 insertions(+), 29 deletions(-)

diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c
index 6a4f07b8d114..ec63338b5bfc 100644
--- a/hw/nvme/ctrl.c
+++ b/hw/nvme/ctrl.c
@@ -6551,8 +6551,28 @@ void nvme_attach_ns(NvmeState *n, NvmeNamespace *ns)

 static void nvme_realize(PCIDevice *pci_dev, Error **errp)
 {
-    NvmeCtrl *ctrl = NVME_DEVICE(pci_dev);
-    NvmeState *n = NVME_STATE(ctrl);
+    NvmeState *n = NVME_STATE(pci_dev);
+
+    if (nvme_check_constraints(n, errp)) {
+        return;
+    }
+
+    nvme_init_state(n);
+    if (nvme_init_pci(n, pci_dev, errp)) {
+        return;
+    }
+
+    if (nvme_init_subsys(n, errp)) {
+        return;
+    }
+
+    nvme_init_ctrl(n, pci_dev);
+}
+
+static void nvme_legacy_realize(PCIDevice *pci_dev, Error **errp)
+{
+    NvmeState *n = NVME_STATE(pci_dev);
+    NvmeCtrlLegacyDevice *ctrl = NVME_DEVICE_LEGACY(n);

     if (ctrl->namespace.blkconf.blk && n->subsys) {
         error_setg(errp, "subsystem support is unavailable with legacy "
@@ -6575,6 +6595,7 @@ static void nvme_realize(PCIDevice *pci_dev, Error **errp)
     if (nvme_init_subsys(n, errp)) {
         return;
     }
+
     nvme_init_ctrl(n, pci_dev);

     /* setup a namespace if the controller drive property was given */
@@ -6629,24 +6650,40 @@ static Property nvme_state_props[] = {
     DEFINE_PROP_LINK("subsys", NvmeState, subsys, TYPE_NVME_SUBSYS,
                      NvmeSubsystem *),
     DEFINE_PROP_STRING("serial", NvmeState, params.serial),
-    DEFINE_PROP_UINT32("cmb_size_mb", NvmeState, params.cmb_size_mb, 0),
-    DEFINE_PROP_UINT32("num_queues", NvmeState, params.num_queues, 0),
-    DEFINE_PROP_UINT32("max_ioqpairs", NvmeState, params.max_ioqpairs, 64),
-    DEFINE_PROP_UINT16("msix_qsize", NvmeState, params.msix_qsize, 65),
     DEFINE_PROP_UINT8("aerl", NvmeState, params.aerl, 3),
-    DEFINE_PROP_UINT32("aer_max_queued", NvmeState, params.aer_max_queued, 64),
     DEFINE_PROP_UINT8("mdts", NvmeState, params.mdts, 7),
-    DEFINE_PROP_UINT8("vsl", NvmeState, params.vsl, 7),
-    DEFINE_PROP_BOOL("use-intel-id", NvmeState, params.use_intel_id, false),
     DEFINE_PROP_BOOL("legacy-cmb", NvmeState, params.legacy_cmb, false),
-    DEFINE_PROP_UINT8("zoned.zasl", NvmeState, params.zasl, 0),
-    DEFINE_PROP_BOOL("zoned.auto_transition", NvmeState,
-                     params.auto_transition_zones, true),
     DEFINE_PROP_END_OF_LIST(),
 };

 static Property nvme_props[] = {
-    DEFINE_BLOCK_PROPERTIES(NvmeCtrl, namespace.blkconf),
+    DEFINE_PROP_UINT32("cmb-size-mb", NvmeState, params.cmb_size_mb, 0),
+    DEFINE_PROP_UINT32("max-aen-retention", NvmeState, params.aer_max_queued, 64),
+    DEFINE_PROP_UINT32("max-ioqpairs", NvmeState, params.max_ioqpairs, 64),
+    DEFINE_PROP_UINT16("msix-vectors", NvmeState, params.msix_qsize, 2048),
+
+    /* nvm command set specific properties */
+    DEFINE_PROP_UINT8("nvm-vsl", NvmeState, params.vsl, 7),
+
+    /* zoned command set specific properties */
+    DEFINE_PROP_UINT8("zoned-zasl", NvmeState, params.zasl, 0),
+    DEFINE_PROP_BOOL("zoned-auto-transition-zones", NvmeState,
+                     params.auto_transition_zones, true),
+    DEFINE_PROP_END_OF_LIST(),
+};
+
+static Property nvme_legacy_props[] = {
+    DEFINE_BLOCK_PROPERTIES(NvmeCtrlLegacyDevice, namespace.blkconf),
+    DEFINE_PROP_UINT32("cmb_size_mb", NvmeState, params.cmb_size_mb, 0),
+    DEFINE_PROP_UINT32("num_queues", NvmeState, params.num_queues, 0),
+    DEFINE_PROP_UINT32("aer_max_queued", NvmeState, params.aer_max_queued, 64),
+    DEFINE_PROP_UINT32("max_ioqpairs", NvmeState, params.max_ioqpairs, 64),
+    DEFINE_PROP_UINT16("msix_qsize", NvmeState, params.msix_qsize, 65),
+    DEFINE_PROP_BOOL("use-intel-id", NvmeState, params.use_intel_id, false),
+    DEFINE_PROP_UINT8("vsl", NvmeState, params.vsl, 7),
+    DEFINE_PROP_UINT8("zoned.zasl", NvmeState, params.zasl, 0),
+    DEFINE_PROP_BOOL("zoned.auto_transition", NvmeState,
+                     params.auto_transition_zones, true),
     DEFINE_PROP_END_OF_LIST(),
 };

@@ -6702,7 +6739,6 @@ static void nvme_state_class_init(ObjectClass *oc, void *data)
     DeviceClass *dc = DEVICE_CLASS(oc);
     PCIDeviceClass *pc = PCI_DEVICE_CLASS(oc);

-    pc->realize = nvme_realize;
     pc->exit = nvme_exit;
     pc->class_id = PCI_CLASS_STORAGE_EXPRESS;
     pc->revision = 2;
@@ -6736,25 +6772,45 @@ static const TypeInfo nvme_state_info = {
 static void nvme_class_init(ObjectClass *oc, void *data)
 {
     DeviceClass *dc = DEVICE_CLASS(oc);
+    PCIDeviceClass *pc = PCI_DEVICE_CLASS(oc);
+
+    pc->realize = nvme_realize;
+
     device_class_set_props(dc, nvme_props);
 }

-static void nvme_instance_init(Object *obj)
-{
-    NvmeCtrl *ctrl = NVME_DEVICE(obj);
-
-    device_add_bootindex_property(obj, &ctrl->namespace.blkconf.bootindex,
-                                  "bootindex", "/namespace@1,0",
-                                  DEVICE(obj));
-}
-
 static const TypeInfo nvme_info = {
     .name = TYPE_NVME_DEVICE,
     .parent = TYPE_NVME_STATE,
     .class_init = nvme_class_init,
     .instance_size = sizeof(NvmeCtrl),
-    .instance_init = nvme_instance_init,
-    .class_init = nvme_class_init,
+};
+
+static void nvme_legacy_instance_init(Object *obj)
+{
+    NvmeCtrlLegacyDevice *ctrl = NVME_DEVICE_LEGACY(obj);
+
+    device_add_bootindex_property(obj, &ctrl->namespace.blkconf.bootindex,
+                                  "bootindex", "/namespace@1,0",
+                                  DEVICE(obj));
+}
+
+static void nvme_legacy_class_init(ObjectClass *oc, void *data)
+{
+    DeviceClass *dc = DEVICE_CLASS(oc);
+    PCIDeviceClass *pc = PCI_DEVICE_CLASS(oc);
+
+    pc->realize = nvme_legacy_realize;
+
+    device_class_set_props(dc, nvme_legacy_props);
+}
+
+static const TypeInfo nvme_legacy_info = {
+    .name = TYPE_NVME_DEVICE_LEGACY,
+    .parent = TYPE_NVME_STATE,
+    .class_init = nvme_legacy_class_init,
+    .instance_size = sizeof(NvmeCtrlLegacyDevice),
+    .instance_init = nvme_legacy_instance_init,
 };

 static const TypeInfo nvme_bus_info = {
@@ -6763,11 +6819,12 @@ static const TypeInfo nvme_bus_info = {
     .instance_size = sizeof(NvmeBus),
 };

-static void nvme_register_types(void)
+static void register_types(void)
 {
     type_register_static(&nvme_state_info);
     type_register_static(&nvme_info);
+    type_register_static(&nvme_legacy_info);
     type_register_static(&nvme_bus_info);
 }

-type_init(nvme_register_types)
+type_init(register_types)
diff --git a/hw/nvme/nvme.h b/hw/nvme/nvme.h
index 980a471e195f..629a8ccab9f8 100644
--- a/hw/nvme/nvme.h
+++ b/hw/nvme/nvme.h
@@ -35,9 +35,12 @@ typedef struct NvmeNamespace NvmeNamespace;
 #define TYPE_NVME_STATE "nvme-state"
 OBJECT_DECLARE_SIMPLE_TYPE(NvmeState, NVME_STATE)

-#define TYPE_NVME_DEVICE "nvme"
+#define TYPE_NVME_DEVICE "x-nvme-ctrl"
 OBJECT_DECLARE_SIMPLE_TYPE(NvmeCtrl, NVME_DEVICE)

+#define TYPE_NVME_DEVICE_LEGACY "nvme"
+OBJECT_DECLARE_SIMPLE_TYPE(NvmeCtrlLegacyDevice, NVME_DEVICE_LEGACY)
+
 #define TYPE_NVME_BUS "nvme-bus"
 OBJECT_DECLARE_SIMPLE_TYPE(NvmeBus, NVME_BUS)

@@ -457,12 +460,16 @@ typedef struct NvmeState {

 typedef struct NvmeCtrl {
     NvmeState parent_obj;
+} NvmeCtrl;
+
+typedef struct NvmeCtrlLegacyDevice {
+    NvmeState parent_obj;

     NvmeBus bus;

     /* for use with legacy single namespace (-device nvme,drive=...) setups */
     NvmeNamespaceDevice namespace;
-} NvmeCtrl;
+} NvmeCtrlLegacyDevice;

 static inline NvmeNamespace *nvme_ns(NvmeState *n, uint32_t nsid)
 {

From patchwork Tue Sep 14 20:37:34 2021
X-Patchwork-Submitter: Klaus Jensen
X-Patchwork-Id: 1528148
From: Klaus Jensen
To: qemu-devel@nongnu.org
Subject: [PATCH RFC 10/13] hw/nvme: add experimental object x-nvme-subsystem
Date: Tue, 14 Sep 2021 22:37:34 +0200
Message-Id: <20210914203737.182571-11-its@irrelevant.dk>
In-Reply-To: <20210914203737.182571-1-its@irrelevant.dk>
References: <20210914203737.182571-1-its@irrelevant.dk>
Cc: Kevin Wolf, qemu-block@nongnu.org, Klaus Jensen, Markus Armbruster,
    Hanna Reitz, Stefan Hajnoczi, Keith Busch, Philippe Mathieu-Daudé

From: Klaus Jensen

Add a basic user creatable object that models an NVMe NVM subsystem.

Signed-off-by: Klaus Jensen
---
 hw/nvme/ctrl.c   |  26 +++++++---
 hw/nvme/ns.c     |   5 +-
 hw/nvme/nvme.h   |  30 ++++++++----
 hw/nvme/subsys.c | 121 +++++++++++++++++++++++++++++++++++++++--------
 qapi/qom.json    |  17 +++++++
 5 files changed, 162 insertions(+), 37 deletions(-)

diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c
index ec63338b5bfc..563a8f8ad1df 100644
--- a/hw/nvme/ctrl.c
+++ b/hw/nvme/ctrl.c
@@ -6526,7 +6526,7 @@ static int nvme_init_subsys(NvmeState *n, Error **errp)
         return 0;
     }

-    cntlid = nvme_subsys_register_ctrl(n, errp);
+    cntlid = nvme_subsys_register_ctrl(n->subsys, n, errp);
     if (cntlid < 0) {
         return -1;
     }
@@ -6557,6 +6557,12 @@ static void nvme_realize(PCIDevice *pci_dev, Error **errp)
         return;
     }

+    if (!n->subsys) {
+        error_setg(errp, "device '%s' requires the 'subsys' parameter",
+                   TYPE_NVME_DEVICE);
+        return;
+    }
+
     nvme_init_state(n);
     if (nvme_init_pci(n, pci_dev, errp)) {
         return;
@@ -6574,10 +6580,14 @@ static void nvme_legacy_realize(PCIDevice *pci_dev, Error **errp)
     NvmeState *n = NVME_STATE(pci_dev);
     NvmeCtrlLegacyDevice *ctrl = NVME_DEVICE_LEGACY(n);

-    if (ctrl->namespace.blkconf.blk && n->subsys) {
-        error_setg(errp, "subsystem support is unavailable with legacy "
-                   "namespace ('drive' property)");
-        return;
+    if (ctrl->subsys_dev) {
+        if (ctrl->namespace.blkconf.blk) {
+            error_setg(errp, "subsystem support is unavailable with legacy "
+                       "namespace ('drive' property)");
+            return;
+        }
+
+        n->subsys = &ctrl->subsys_dev->subsys;
     }

     if (nvme_check_constraints(n, errp)) {
@@ -6647,8 +6657,6 @@ static void nvme_exit(PCIDevice *pci_dev)
 static Property nvme_state_props[] = {
     DEFINE_PROP_LINK("pmrdev", NvmeState, pmr.dev, TYPE_MEMORY_BACKEND,
                      HostMemoryBackend *),
-    DEFINE_PROP_LINK("subsys", NvmeState, subsys, TYPE_NVME_SUBSYS,
-                     NvmeSubsystem *),
     DEFINE_PROP_STRING("serial", NvmeState, params.serial),
     DEFINE_PROP_UINT8("aerl", NvmeState, params.aerl, 3),
     DEFINE_PROP_UINT8("mdts", NvmeState, params.mdts, 7),
@@ -6657,6 +6665,8 @@ static Property nvme_state_props[] = {
 };

 static Property nvme_props[] = {
+    DEFINE_PROP_LINK("subsys", NvmeState, subsys, TYPE_NVME_SUBSYSTEM,
+                     NvmeSubsystem *),
     DEFINE_PROP_UINT32("cmb-size-mb", NvmeState, params.cmb_size_mb, 0),
     DEFINE_PROP_UINT32("max-aen-retention", NvmeState, params.aer_max_queued, 64),
     DEFINE_PROP_UINT32("max-ioqpairs", NvmeState, params.max_ioqpairs, 64),
@@ -6674,6 +6684,8 @@ static Property nvme_props[] = {

 static Property nvme_legacy_props[] = {
     DEFINE_BLOCK_PROPERTIES(NvmeCtrlLegacyDevice, namespace.blkconf),
+    DEFINE_PROP_LINK("subsys", NvmeCtrlLegacyDevice, subsys_dev,
+                     TYPE_NVME_SUBSYS_DEVICE, NvmeSubsystemDevice *),
     DEFINE_PROP_UINT32("cmb_size_mb", NvmeState, params.cmb_size_mb, 0),
     DEFINE_PROP_UINT32("num_queues", NvmeState, params.num_queues, 0),
     DEFINE_PROP_UINT32("aer_max_queued", NvmeState, params.aer_max_queued, 64),
diff --git a/hw/nvme/ns.c b/hw/nvme/ns.c
index bdd41a3d1fc3..3d643554644c 100644
--- a/hw/nvme/ns.c
+++ b/hw/nvme/ns.c
@@ -493,7 +493,8 @@ static void nvme_nsdev_realize(DeviceState *dev, Error **errp)
     NvmeNamespaceDevice *nsdev = NVME_NAMESPACE_DEVICE(dev);
     NvmeNamespace *ns = &nsdev->ns;
     BusState *s = qdev_get_parent_bus(dev);
-    NvmeState *n = NVME_STATE(s->parent);
+    NvmeCtrlLegacyDevice *ctrl = NVME_DEVICE_LEGACY(s->parent);
+    NvmeState *n = NVME_STATE(ctrl);
     NvmeSubsystem *subsys = n->subsys;
     uint32_t nsid = nsdev->params.nsid;
     int i;
@@ -515,7 +516,7 @@ static void nvme_nsdev_realize(DeviceState *dev, Error **errp)
          * If this namespace belongs to a subsystem (through a link on the
          * controller device), reparent the device.
          */
-        if (!qdev_set_parent_bus(dev, &subsys->bus.parent_bus, errp)) {
+        if (!qdev_set_parent_bus(dev, &ctrl->subsys_dev->bus.parent_bus, errp)) {
             return;
         }
     }
diff --git a/hw/nvme/nvme.h b/hw/nvme/nvme.h
index 629a8ccab9f8..1ae185139132 100644
--- a/hw/nvme/nvme.h
+++ b/hw/nvme/nvme.h
@@ -48,24 +48,36 @@ typedef struct NvmeBus {
     BusState parent_bus;
 } NvmeBus;

-#define TYPE_NVME_SUBSYS "nvme-subsys"
-#define NVME_SUBSYS(obj) \
-    OBJECT_CHECK(NvmeSubsystem, (obj), TYPE_NVME_SUBSYS)
+#define TYPE_NVME_SUBSYSTEM "x-nvme-subsystem"
+OBJECT_DECLARE_SIMPLE_TYPE(NvmeSubsystem, NVME_SUBSYSTEM)

 typedef struct NvmeSubsystem {
-    DeviceState parent_obj;
-    NvmeBus bus;
-    uint8_t subnqn[256];
+    Object parent_obj;
+
+    QemuUUID uuid;
+    uint8_t subnqn[256];

     NvmeState *ctrls[NVME_MAX_CONTROLLERS];
     NvmeNamespace *namespaces[NVME_MAX_NAMESPACES + 1];
+} NvmeSubsystem;
+
+#define TYPE_NVME_SUBSYS_DEVICE "nvme-subsys"
+#define NVME_SUBSYS_DEVICE(obj) \
+    OBJECT_CHECK(NvmeSubsystemDevice, (obj), TYPE_NVME_SUBSYS_DEVICE)
+
+typedef struct NvmeSubsystemDevice {
+    DeviceState parent_obj;
+    NvmeBus bus;
+
+    NvmeSubsystem subsys;

     struct {
         char *nqn;
     } params;
-} NvmeSubsystem;
+} NvmeSubsystemDevice;

-int nvme_subsys_register_ctrl(NvmeState *n, Error **errp);
+int nvme_subsys_register_ctrl(NvmeSubsystem *subsys, NvmeState *n,
+                              Error **errp);
 void nvme_subsys_unregister_ctrl(NvmeSubsystem *subsys, NvmeState *n);

 static inline NvmeState *nvme_subsys_ctrl(NvmeSubsystem *subsys,
@@ -469,6 +481,8 @@ typedef struct NvmeCtrlLegacyDevice {

     /* for use with legacy single namespace (-device nvme,drive=...) setups */
     NvmeNamespaceDevice namespace;
+
+    NvmeSubsystemDevice *subsys_dev;
 } NvmeCtrlLegacyDevice;

 static inline NvmeNamespace *nvme_ns(NvmeState *n, uint32_t nsid)
diff --git a/hw/nvme/subsys.c b/hw/nvme/subsys.c
index 2442ae83b744..26e8e087e986 100644
--- a/hw/nvme/subsys.c
+++ b/hw/nvme/subsys.c
@@ -8,12 +8,12 @@

 #include "qemu/osdep.h"
 #include "qapi/error.h"
+#include "qom/object_interfaces.h"

 #include "nvme.h"

-int nvme_subsys_register_ctrl(NvmeState *n, Error **errp)
+int nvme_subsys_register_ctrl(NvmeSubsystem *subsys, NvmeState *n, Error **errp)
 {
-    NvmeSubsystem *subsys = n->subsys;
     int cntlid;

     for (cntlid = 0; cntlid < ARRAY_SIZE(subsys->ctrls); cntlid++) {
@@ -37,53 +37,134 @@ void nvme_subsys_unregister_ctrl(NvmeSubsystem *subsys, NvmeState *n)
     subsys->ctrls[n->cntlid] = NULL;
 }

-static void nvme_subsys_setup(NvmeSubsystem *subsys)
+static char *get_subnqn(Object *obj, Error **errp)
 {
-    const char *nqn = subsys->params.nqn ?
-        subsys->params.nqn : subsys->parent_obj.id;
+    NvmeSubsystem *subsys = NVME_SUBSYSTEM(obj);
+    return g_strdup((char *)subsys->subnqn);
+}
+
+static void set_subnqn(Object *obj, const char *str, Error **errp)
+{
+    NvmeSubsystem *subsys = NVME_SUBSYSTEM(obj);
+    snprintf((char *)subsys->subnqn, sizeof(subsys->subnqn), "%s", str);
+}
+
+static char *get_uuid(Object *obj, Error **errp)
+{
+    NvmeSubsystem *subsys = NVME_SUBSYSTEM(obj);
+    char buf[UUID_FMT_LEN + 1];
+
+    qemu_uuid_unparse(&subsys->uuid, buf);
+
+    return g_strdup(buf);
+}
+
+static void set_uuid(Object *obj, const char *str, Error **errp)
+{
+    NvmeSubsystem *subsys = NVME_SUBSYSTEM(obj);
+
+    if (!strcmp(str, "auto")) {
+        qemu_uuid_generate(&subsys->uuid);
+    } else if (qemu_uuid_parse(str, &subsys->uuid) < 0) {
+        error_setg(errp, "invalid uuid");
+        return;
+    }
+}
+
+static void subsys_complete(UserCreatable *uc, Error **errp)
+{
+    NvmeSubsystem *subsys = NVME_SUBSYSTEM(uc);
+
+    if (qemu_uuid_is_null(&subsys->uuid)) {
+        qemu_uuid_generate(&subsys->uuid);
+    }
+
+    if
(!strcmp((char *)subsys->subnqn, "")) { + char buf[UUID_FMT_LEN + 1]; + + qemu_uuid_unparse(&subsys->uuid, buf); + + snprintf((char *)subsys->subnqn, sizeof(subsys->subnqn), + "nqn.2021-08.org.qemu:uuid:%s", buf); + } +} + +static void subsys_class_init(ObjectClass *oc, void *data) +{ + UserCreatableClass *ucc = USER_CREATABLE_CLASS(oc); + ucc->complete = subsys_complete; + + object_class_property_add_str(oc, "subnqn", get_subnqn, set_subnqn); + object_class_property_set_description(oc, "subnqn", "the NVM Subsystem " + "NVMe Qualified Name; " + "(default: nqn.2021-08.org.qemu:uuid:)"); + + object_class_property_add_str(oc, "uuid", get_uuid, set_uuid); + object_class_property_set_description(oc, "uuid", "NVM Subsystem UUID " + "(\"auto\" for random value; " + "default: auto)"); +} + +static const TypeInfo subsys_info = { + .name = TYPE_NVME_SUBSYSTEM, + .parent = TYPE_OBJECT, + .class_init = subsys_class_init, + .instance_size = sizeof(NvmeSubsystem), + .interfaces = (InterfaceInfo[]) { + { TYPE_USER_CREATABLE }, + { }, + } +}; + +static void subsys_dev_setup(NvmeSubsystemDevice *dev) +{ + NvmeSubsystem *subsys = &dev->subsys; + const char *nqn = dev->params.nqn ? 
+ dev->params.nqn : dev->parent_obj.id; snprintf((char *)subsys->subnqn, sizeof(subsys->subnqn), "nqn.2019-08.org.qemu:%s", nqn); } -static void nvme_subsys_realize(DeviceState *dev, Error **errp) +static void subsys_dev_realize(DeviceState *dev, Error **errp) { - NvmeSubsystem *subsys = NVME_SUBSYS(dev); + NvmeSubsystemDevice *subsys = NVME_SUBSYS_DEVICE(dev); qbus_create_inplace(&subsys->bus, sizeof(NvmeBus), TYPE_NVME_BUS, dev, dev->id); - nvme_subsys_setup(subsys); + subsys_dev_setup(subsys); } -static Property nvme_subsystem_props[] = { - DEFINE_PROP_STRING("nqn", NvmeSubsystem, params.nqn), +static Property subsys_dev_props[] = { + DEFINE_PROP_STRING("nqn", NvmeSubsystemDevice, params.nqn), DEFINE_PROP_END_OF_LIST(), }; -static void nvme_subsys_class_init(ObjectClass *oc, void *data) +static void subsys_dev_class_init(ObjectClass *oc, void *data) { DeviceClass *dc = DEVICE_CLASS(oc); set_bit(DEVICE_CATEGORY_STORAGE, dc->categories); - dc->realize = nvme_subsys_realize; + dc->realize = subsys_dev_realize; dc->desc = "Virtual NVMe subsystem"; dc->hotpluggable = false; - device_class_set_props(dc, nvme_subsystem_props); + device_class_set_props(dc, subsys_dev_props); } -static const TypeInfo nvme_subsys_info = { - .name = TYPE_NVME_SUBSYS, +static const TypeInfo subsys_dev_info = { + .name = TYPE_NVME_SUBSYS_DEVICE, .parent = TYPE_DEVICE, - .class_init = nvme_subsys_class_init, - .instance_size = sizeof(NvmeSubsystem), + .class_init = subsys_dev_class_init, + .instance_size = sizeof(NvmeSubsystemDevice), }; -static void nvme_subsys_register_types(void) +static void register_types(void) { - type_register_static(&nvme_subsys_info); + type_register_static(&subsys_info); + type_register_static(&subsys_dev_info); } -type_init(nvme_subsys_register_types) +type_init(register_types) diff --git a/qapi/qom.json b/qapi/qom.json index 6d5f4a88e644..8d7c968fbd88 100644 --- a/qapi/qom.json +++ b/qapi/qom.json @@ -647,6 +647,21 @@ '*hugetlbsize': 'size', '*seal': 'bool' } } 
+## +# @NvmeSubsystemProperties: +# +# Properties for nvme-subsys objects. +# +# @subnqn: the NVM Subsystem NVMe Qualified Name +# +# @uuid: the UUID of the subsystem. Used as default in subnqn. +# +# Since: 6.1 +## +{ 'struct': 'NvmeSubsystemProperties', + 'data': { 'subnqn': 'str', + '*uuid': 'str' } } + ## # @PrManagerHelperProperties: # @@ -797,6 +812,7 @@ { 'name': 'memory-backend-memfd', 'if': 'defined(CONFIG_LINUX)' }, 'memory-backend-ram', + 'x-nvme-subsystem', 'pef-guest', 'pr-manager-helper', 'qtest', @@ -855,6 +871,7 @@ 'memory-backend-memfd': { 'type': 'MemoryBackendMemfdProperties', 'if': 'defined(CONFIG_LINUX)' }, 'memory-backend-ram': 'MemoryBackendProperties', + 'x-nvme-subsystem': 'NvmeSubsystemProperties', 'pr-manager-helper': 'PrManagerHelperProperties', 'qtest': 'QtestProperties', 'rng-builtin': 'RngProperties',
From patchwork Tue Sep 14 20:37:35 2021
X-Patchwork-Submitter: Klaus Jensen
X-Patchwork-Id: 1528146
From: Klaus Jensen
To: qemu-devel@nongnu.org
Subject: [PATCH RFC 11/13] hw/nvme: add experimental abstract object x-nvme-ns
Date: Tue, 14 Sep 2021 22:37:35 +0200
Message-Id: <20210914203737.182571-12-its@irrelevant.dk>
In-Reply-To: <20210914203737.182571-1-its@irrelevant.dk>
References: <20210914203737.182571-1-its@irrelevant.dk>
Cc: Kevin Wolf , qemu-block@nongnu.org, Klaus Jensen ,
Markus Armbruster , Klaus Jensen , Hanna Reitz , Stefan Hajnoczi , Keith Busch , =?utf-8?q?Philippe_Mathieu-Daud=C3=A9?= Errors-To: qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org Sender: "Qemu-devel" From: Klaus Jensen Add the abstract NvmeNamespace object to base proper namespace types on. Signed-off-by: Klaus Jensen --- hw/nvme/ns.c | 286 +++++++++++++++++++++++++++++++++++++++++++++++ hw/nvme/nvme.h | 24 ++++ hw/nvme/subsys.c | 31 +++++ qapi/qom.json | 18 +++ 4 files changed, 359 insertions(+) diff --git a/hw/nvme/ns.c b/hw/nvme/ns.c index 3d643554644c..05828fbb48a5 100644 --- a/hw/nvme/ns.c +++ b/hw/nvme/ns.c @@ -13,9 +13,13 @@ */ #include "qemu/osdep.h" +#include "qemu/cutils.h" +#include "qemu/ctype.h" #include "qemu/units.h" #include "qemu/error-report.h" #include "qapi/error.h" +#include "qapi/qapi-builtin-visit.h" +#include "qom/object_interfaces.h" #include "sysemu/sysemu.h" #include "sysemu/block-backend.h" @@ -638,8 +642,290 @@ static const TypeInfo nvme_nsdev_info = { .instance_init = nvme_nsdev_instance_init, }; +bool nvme_ns_prop_writable(Object *obj, const char *name, Error **errp) +{ + NvmeNamespace *ns = NVME_NAMESPACE(obj); + + if (ns->realized) { + error_setg(errp, "attempt to set immutable property '%s' on " + "active namespace", name); + return false; + } + + return true; +} + +static void set_attached(Object *obj, Visitor *v, const char *name, + void *opaque, Error **errp) +{ + NvmeNamespace *ns = NVME_NAMESPACE(obj); + + if (!nvme_ns_prop_writable(obj, name, errp)) { + return; + } + + visit_type_strList(v, name, &ns->_ctrls, errp); +} + +static void get_attached(Object *obj, Visitor *v, const char *name, + void *opaque, Error **errp) +{ + NvmeNamespace *ns = NVME_NAMESPACE(obj); + strList *paths = NULL; + strList **tail = &paths; + int cntlid; + + for (cntlid = 0; cntlid < ARRAY_SIZE(ns->subsys->ctrls); cntlid++) { + NvmeState *ctrl = nvme_subsys_ctrl(ns->subsys, cntlid); + if (!ctrl || !nvme_ns(ctrl, ns->nsid)) { + continue; + 
} + + QAPI_LIST_APPEND(tail, object_get_canonical_path(OBJECT(ctrl))); + } + + visit_type_strList(v, name, &paths, errp); + qapi_free_strList(paths); +} + +static void get_nsid(Object *obj, Visitor *v, const char *name, void *opaque, + Error **errp) +{ + NvmeNamespace *ns = NVME_NAMESPACE(obj); + uint32_t value = ns->nsid; + + visit_type_uint32(v, name, &value, errp); +} + +static void set_nsid(Object *obj, Visitor *v, const char *name, void *opaque, + Error **errp) +{ + NvmeNamespace *ns = NVME_NAMESPACE(obj); + uint32_t value; + + if (!nvme_ns_prop_writable(obj, name, errp)) { + return; + } + + if (!visit_type_uint32(v, name, &value, errp)) { + return; + } + + if (value > NVME_MAX_NAMESPACES) { + error_setg(errp, "invalid namespace identifier"); + return; + } + + ns->nsid = value; +} + +static char *get_uuid(Object *obj, Error **errp) +{ + NvmeNamespace *ns = NVME_NAMESPACE(obj); + + char *str = g_malloc(UUID_FMT_LEN + 1); + + qemu_uuid_unparse(&ns->uuid, str); + + return str; +} + +static void set_uuid(Object *obj, const char *v, Error **errp) +{ + NvmeNamespace *ns = NVME_NAMESPACE(obj); + + if (!nvme_ns_prop_writable(obj, "uuid", errp)) { + return; + } + + if (!strcmp(v, "auto")) { + qemu_uuid_generate(&ns->uuid); + } else if (qemu_uuid_parse(v, &ns->uuid) < 0) { + error_setg(errp, "invalid uuid"); + } +} + +static char *get_eui64(Object *obj, Error **errp) +{ + NvmeNamespace *ns = NVME_NAMESPACE(obj); + + const int len = 2 * 8 + 7 + 1; /* "aa:bb:cc:dd:ee:ff:gg:hh\0" */ + char *str = g_malloc(len); + + snprintf(str, len, "%02x:%02x:%02x:%02x:%02x:%02x:%02x:%02x", + ns->eui64.a[0], ns->eui64.a[1], ns->eui64.a[2], ns->eui64.a[3], + ns->eui64.a[4], ns->eui64.a[5], ns->eui64.a[6], ns->eui64.a[7]); + + return str; +} + +static void set_eui64(Object *obj, const char *v, Error **errp) +{ + NvmeNamespace *ns = NVME_NAMESPACE(obj); + + int i, pos; + + if (!nvme_ns_prop_writable(obj, "eui64", errp)) { + return; + } + + if (!strcmp(v, "auto")) { + ns->eui64.a[0] = 0x52; 
+ ns->eui64.a[1] = 0x54; + ns->eui64.a[2] = 0x00; + + for (i = 0; i < 5; ++i) { + ns->eui64.a[3 + i] = g_random_int(); + } + + return; + } + + for (i = 0, pos = 0; i < 8; i++, pos += 3) { + long octet; + + if (!(qemu_isxdigit(v[pos]) && qemu_isxdigit(v[pos + 1]))) { + goto invalid; + } + + if (i == 7) { + if (v[pos + 2] != '\0') { + goto invalid; + } + } else { + if (!(v[pos + 2] == ':' || v[pos + 2] == '-')) { + goto invalid; + } + } + + if (qemu_strtol(v + pos, NULL, 16, &octet) < 0 || octet > 0xff) { + goto invalid; + } + + ns->eui64.a[i] = octet; + } + + return; + +invalid: + error_setg(errp, "invalid ieee extended unique identifier"); +} + +static void set_ns_identifiers_if_unset(NvmeNamespace *ns) +{ + if (!ns->eui64.v) { + ns->eui64.a[0] = 0x52; + ns->eui64.a[1] = 0x54; + ns->eui64.a[2] = 0x00; + ns->eui64.a[4] = ns->nsid >> 24; + ns->eui64.a[5] = ns->nsid >> 16; + ns->eui64.a[6] = ns->nsid >> 8; + ns->eui64.a[7] = ns->nsid; + } + + ns->nguid.eui = ns->eui64.v; +} + +static void nvme_ns_machine_done(Notifier *notifier, void *data) +{ + NvmeNamespace *ns = container_of(notifier, NvmeNamespace, machine_done); + NvmeNamespaceClass *nc = NVME_NAMESPACE_GET_CLASS(ns); + Error *err = NULL; + + if (nvme_subsys_register_ns(ns->subsys, ns, &err)) { + error_report_err(err); + return; + } + + if (nc->configure && nc->configure(ns, &err)) { + error_report_err(err); + return; + } + + for (strList *l = ns->_ctrls; l; l = l->next) { + Object *obj = object_resolve_path_type(l->value, TYPE_NVME_DEVICE, NULL); + if (!obj) { + error_report("controller '%s' not found", l->value); + continue; + } + + nvme_attach_ns(NVME_STATE(obj), ns); + } + + qapi_free_strList(ns->_ctrls); +} + +static void nvme_ns_complete(UserCreatable *uc, Error **errp) +{ + NvmeNamespace *ns = NVME_NAMESPACE(uc); + NvmeNamespaceClass *nc = NVME_NAMESPACE_GET_CLASS(ns); + + set_ns_identifiers_if_unset(ns); + + ns->flags |= NVME_NS_SHARED; + + if (nc->check_params && nc->check_params(ns, errp)) { + return; + 
} + + ns->machine_done.notify = nvme_ns_machine_done; + qemu_add_machine_init_done_notifier(&ns->machine_done); + + ns->realized = true; +} + +static void nvme_ns_class_init(ObjectClass *oc, void *data) +{ + UserCreatableClass *ucc = USER_CREATABLE_CLASS(oc); + + ucc->complete = nvme_ns_complete; + + object_class_property_add(oc, "nsid", "uint32", get_nsid, set_nsid, NULL, + NULL); + object_class_property_set_description(oc, "nsid", "namespace identifier " + "(\"auto\": assigned by controller " + "or subsystem; default: auto)"); + + object_class_property_add_link(oc, "subsys", TYPE_NVME_SUBSYSTEM, + offsetof(NvmeNamespace, subsys), + object_property_allow_set_link, 0); + object_class_property_set_description(oc, "subsys", "link to " + "x-nvme-subsystem object"); + + object_class_property_add(oc, "attached-ctrls", "str", get_attached, + set_attached, NULL, NULL); + object_class_property_set_description(oc, "attached-ctrls", "list of " + "controllers attached to the " + "namespace"); + + object_class_property_add_str(oc, "uuid", get_uuid, set_uuid); + object_class_property_set_description(oc, "uuid", "namespace uuid " + "(\"auto\" for random value; " + "default: 0)"); + + object_class_property_add_str(oc, "eui64", get_eui64, set_eui64); + object_class_property_set_description(oc, "eui64", "IEEE Extended Unique " + "Identifier (\"auto\" for random " + "value; " + "default: \"52:54:00:\")"); +} + +static const TypeInfo nvme_ns_info = { + .name = TYPE_NVME_NAMESPACE, + .parent = TYPE_OBJECT, + .abstract = true, + .instance_size = sizeof(NvmeNamespace), + .class_size = sizeof(NvmeNamespaceClass), + .class_init = nvme_ns_class_init, + .interfaces = (InterfaceInfo[]) { + { TYPE_USER_CREATABLE }, + { }, + }, +}; + static void register_types(void) { + type_register_static(&nvme_ns_info); type_register_static(&nvme_nsdev_info); } diff --git a/hw/nvme/nvme.h b/hw/nvme/nvme.h index 1ae185139132..2a623b9eecd6 100644 --- a/hw/nvme/nvme.h +++ b/hw/nvme/nvme.h @@ -19,6 +19,8 @@ 
#define HW_NVME_INTERNAL_H #include "qemu/uuid.h" +#include "qemu/notify.h" +#include "qapi/qapi-builtin-visit.h" #include "hw/pci/pci.h" #include "hw/block/block.h" @@ -48,6 +50,16 @@ typedef struct NvmeBus { BusState parent_bus; } NvmeBus; +#define TYPE_NVME_NAMESPACE "x-nvme-ns" +OBJECT_DECLARE_TYPE(NvmeNamespace, NvmeNamespaceClass, NVME_NAMESPACE) + +struct NvmeNamespaceClass { + ObjectClass parent_class; + + int (*check_params)(NvmeNamespace *ns, Error **errp); + int (*configure)(NvmeNamespace *ns, Error **errp); +}; + #define TYPE_NVME_SUBSYSTEM "x-nvme-subsystem" OBJECT_DECLARE_SIMPLE_TYPE(NvmeSubsystem, NVME_SUBSYSTEM) @@ -79,6 +91,8 @@ typedef struct NvmeSubsystemDevice { int nvme_subsys_register_ctrl(NvmeSubsystem *subsys, NvmeState *n, Error **errp); void nvme_subsys_unregister_ctrl(NvmeSubsystem *subsys, NvmeState *n); +int nvme_subsys_register_ns(NvmeSubsystem *subsys, NvmeNamespace *ns, + Error **errp); static inline NvmeState *nvme_subsys_ctrl(NvmeSubsystem *subsys, uint32_t cntlid) @@ -194,6 +208,14 @@ enum NvmeNamespaceFlags { }; typedef struct NvmeNamespace { + Object parent_obj; + bool realized; + + Notifier machine_done; + + strList *_ctrls; + NvmeSubsystem *subsys; + uint32_t nsid; uint8_t csi; QemuUUID uuid; @@ -217,6 +239,8 @@ typedef struct NvmeNamespace { NvmeNamespaceZoned zoned; } NvmeNamespace; +bool nvme_ns_prop_writable(Object *obj, const char *name, Error **errp); + #define NVME_NAMESPACE_NVM(ns) (&(ns)->nvm) #define NVME_NAMESPACE_ZONED(ns) (&(ns)->zoned) diff --git a/hw/nvme/subsys.c b/hw/nvme/subsys.c index 26e8e087e986..f1fcd7f3980e 100644 --- a/hw/nvme/subsys.c +++ b/hw/nvme/subsys.c @@ -37,6 +37,37 @@ void nvme_subsys_unregister_ctrl(NvmeSubsystem *subsys, NvmeState *n) subsys->ctrls[n->cntlid] = NULL; } +int nvme_subsys_register_ns(NvmeSubsystem *subsys, NvmeNamespace *ns, + Error **errp) +{ + int i; + + if (!ns->nsid) { + for (i = 1; i <= NVME_MAX_NAMESPACES; i++) { + if (!subsys->namespaces[i]) { + ns->nsid = i; + break; + } 
+ } + + if (!ns->nsid) { + error_setg(errp, "no free namespace identifiers"); + return -1; + } + } else if (ns->nsid > NVME_MAX_NAMESPACES) { + error_setg(errp, "invalid namespace identifier '%d'", ns->nsid); + return -1; + } else if (subsys->namespaces[ns->nsid]) { + error_setg(errp, "namespace identifier '%d' already allocated", + ns->nsid); + return -1; + } + + subsys->namespaces[ns->nsid] = ns; + + return 0; +} + static char *get_subnqn(Object *obj, Error **errp) { NvmeSubsystem *subsys = NVME_SUBSYSTEM(obj); diff --git a/qapi/qom.json b/qapi/qom.json index 8d7c968fbd88..ec108e887344 100644 --- a/qapi/qom.json +++ b/qapi/qom.json @@ -662,6 +662,24 @@ 'data': { 'subnqn': 'str', '*uuid': 'str' } } +## +# @NvmeNamespaceProperties: +# +# Properties for x-nvme-ns objects. +# +# @subsys: the NVM subsystem to attach the namespace to +# +# @nsid: namespace identifier to assign +# +# Since: 6.1 +## +{ 'struct': 'NvmeNamespaceProperties', + 'data': { 'subsys': 'str', + '*nsid': 'uint32', + '*eui64': 'str', + '*uuid': 'str', + '*attached-ctrls': ['str'] } } + ## # @PrManagerHelperProperties: #
From patchwork Tue Sep 14 20:37:36 2021
X-Patchwork-Submitter: Klaus Jensen
X-Patchwork-Id: 1528135
From: Klaus Jensen
To: qemu-devel@nongnu.org
Subject: [PATCH RFC 12/13] hw/nvme: add experimental objects x-nvme-ns-{nvm, zoned}
Date: Tue, 14 Sep 2021 22:37:36 +0200
Message-Id: <20210914203737.182571-13-its@irrelevant.dk>
In-Reply-To: <20210914203737.182571-1-its@irrelevant.dk>
References: <20210914203737.182571-1-its@irrelevant.dk>
Cc: Kevin Wolf, qemu-block@nongnu.org, Klaus Jensen, Markus Armbruster, Hanna Reitz, Stefan Hajnoczi, Keith Busch, Philippe Mathieu-Daudé
From: Klaus Jensen
Add implementations of namespaces that support the NVM and Zoned Command Sets. Signed-off-by: Klaus Jensen --- hw/nvme/ctrl.c | 11 +- hw/nvme/dif.h | 2 + hw/nvme/meson.build | 2 +- hw/nvme/ns-nvm.c | 360 +++++++++++++++++++++++++++++++++++ hw/nvme/ns-zoned.c | 449 ++++++++++++++++++++++++++++++++++++++++++++ hw/nvme/ns.c | 281 +++------------------------ hw/nvme/nvm.h | 65 +++++++ hw/nvme/nvme.h | 96 +--------- hw/nvme/zoned.h | 48 +++++ qapi/qom.json | 48 +++++ 10 files changed, 1010 insertions(+), 352 deletions(-) create mode 100644 hw/nvme/ns-nvm.c create mode 100644 hw/nvme/ns-zoned.c create mode 100644 hw/nvme/nvm.h diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c index 563a8f8ad1df..04e564ad6be6 100644 --- a/hw/nvme/ctrl.c +++ b/hw/nvme/ctrl.c @@ -164,6 +164,7 @@ #include "nvme.h" #include "dif.h" +#include "nvm.h" #include "zoned.h" #include "trace.h" @@ -5342,7 +5343,7 @@ static void nvme_format_set(NvmeNamespace *ns, NvmeCmd *cmd) nvm->id_ns.dps = (pil << 3) | pi; nvm->id_ns.flbas = lbaf | (mset << 4); - nvme_ns_nvm_init_format(nvm); + nvme_ns_nvm_configure_format(nvm); } static void nvme_format_ns_cb(void *opaque, int ret) @@ -6611,10 +6612,14 @@ static void nvme_legacy_realize(PCIDevice *pci_dev, Error **errp) /* setup a namespace if the controller drive property was given */ if (ctrl->namespace.blkconf.blk) { NvmeNamespaceDevice *nsdev = &ctrl->namespace; - NvmeNamespace *ns = &nsdev->ns; + NvmeNamespace *ns = NVME_NAMESPACE(nsdev->ns); + NvmeNamespaceNvm *nvm = 
NVME_NAMESPACE_NVM(ns); ns->nsid = 1; - nvme_ns_init(ns); + ns->csi = NVME_CSI_NVM; + + nvme_ns_nvm_configure_identify(ns); + nvme_ns_nvm_configure_format(nvm); nvme_attach_ns(n, ns); } diff --git a/hw/nvme/dif.h b/hw/nvme/dif.h index 53a22bc7c78e..81efb95cd391 100644 --- a/hw/nvme/dif.h +++ b/hw/nvme/dif.h @@ -1,6 +1,8 @@ #ifndef HW_NVME_DIF_H #define HW_NVME_DIF_H +#include "nvm.h" + /* from Linux kernel (crypto/crct10dif_common.c) */ static const uint16_t t10_dif_crc_table[256] = { 0x0000, 0x8BB7, 0x9CD9, 0x176E, 0xB205, 0x39B2, 0x2EDC, 0xA56B, diff --git a/hw/nvme/meson.build b/hw/nvme/meson.build index 3cf40046eea9..2bb8354bcb57 100644 --- a/hw/nvme/meson.build +++ b/hw/nvme/meson.build @@ -1 +1 @@ -softmmu_ss.add(when: 'CONFIG_NVME_PCI', if_true: files('ctrl.c', 'dif.c', 'ns.c', 'subsys.c')) +softmmu_ss.add(when: 'CONFIG_NVME_PCI', if_true: files('ctrl.c', 'dif.c', 'ns.c', 'ns-nvm.c', 'ns-zoned.c', 'subsys.c')) diff --git a/hw/nvme/ns-nvm.c b/hw/nvme/ns-nvm.c new file mode 100644 index 000000000000..afb0482ab9e8 --- /dev/null +++ b/hw/nvme/ns-nvm.c @@ -0,0 +1,360 @@ +#include "qemu/osdep.h" +#include "qemu/units.h" +#include "qemu/error-report.h" +#include "qapi/error.h" +#include "qapi/visitor.h" +#include "qom/object_interfaces.h" +#include "sysemu/sysemu.h" +#include "sysemu/block-backend.h" + +#include "nvme.h" +#include "nvm.h" + +#include "trace.h" + +static char *get_blockdev(Object *obj, Error **errp) +{ + NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(obj); + const char *value; + + value = blk_name(nvm->blk); + if (strcmp(value, "") == 0) { + BlockDriverState *bs = blk_bs(nvm->blk); + if (bs) { + value = bdrv_get_node_name(bs); + } + } + + return g_strdup(value); +} + +static void set_blockdev(Object *obj, const char *str, Error **errp) +{ + NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(obj); + + g_free(nvm->blk_nodename); + nvm->blk_nodename = g_strdup(str); +} + +static void get_lba_size(Object *obj, Visitor *v, const char *name, + void *opaque, Error 
**errp)
+{
+    NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(obj);
+    uint64_t lba_size = nvm->lbasz;
+
+    visit_type_size(v, name, &lba_size, errp);
+}
+
+static void set_lba_size(Object *obj, Visitor *v, const char *name,
+                         void *opaque, Error **errp)
+{
+    NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(obj);
+    uint64_t lba_size;
+
+    if (!nvme_ns_prop_writable(obj, name, errp)) {
+        return;
+    }
+
+    if (!visit_type_size(v, name, &lba_size, errp)) {
+        return;
+    }
+
+    nvm->lbasz = lba_size;
+    nvm->lbaf.ds = 31 - clz32(nvm->lbasz);
+}
+
+static void get_metadata_size(Object *obj, Visitor *v, const char *name,
+                              void *opaque, Error **errp)
+{
+    NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(obj);
+    uint16_t value = nvm->lbaf.ms;
+
+    visit_type_uint16(v, name, &value, errp);
+}
+
+static void set_metadata_size(Object *obj, Visitor *v, const char *name,
+                              void *opaque, Error **errp)
+{
+    NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(obj);
+    uint16_t value;
+
+    if (!nvme_ns_prop_writable(obj, name, errp)) {
+        return;
+    }
+
+    if (!visit_type_uint16(v, name, &value, errp)) {
+        return;
+    }
+
+    nvm->lbaf.ms = value;
+}
+
+static int get_pi(Object *obj, Error **errp)
+{
+    NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(obj);
+    return nvm->id_ns.dps & NVME_ID_NS_DPS_TYPE_MASK;
+}
+
+static void set_pi(Object *obj, int pi_type, Error **errp)
+{
+    NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(obj);
+
+    if (!nvme_ns_prop_writable(obj, "pi-type", errp)) {
+        return;
+    }
+
+    nvm->id_ns.dps = (nvm->id_ns.dps & ~NVME_ID_NS_DPS_TYPE_MASK) | pi_type;
+}
+
+static bool get_pil(Object *obj, Error **errp)
+{
+    NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(obj);
+    return nvm->id_ns.dps & NVME_ID_NS_DPS_FIRST_EIGHT;
+}
+
+static void set_pil(Object *obj, bool first, Error **errp)
+{
+    NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(obj);
+
+    if (!nvme_ns_prop_writable(obj, "pi-first", errp)) {
+        return;
+    }
+
+    if (!first) {
+        nvm->id_ns.dps &= ~NVME_ID_NS_DPS_FIRST_EIGHT;
+        return;
+    }
+
+    nvm->id_ns.dps |= NVME_ID_NS_DPS_FIRST_EIGHT;
+}
+
+static bool get_extended_lba(Object
*obj, Error **errp) +{ + NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(obj); + return nvm->flags & NVME_NS_NVM_EXTENDED_LBA; +} + +static void set_extended_lba(Object *obj, bool extended, Error **errp) +{ + NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(obj); + + if (!nvme_ns_prop_writable(obj, "extended-lba", errp)) { + return; + } + + if (extended) { + nvm->flags |= NVME_NS_NVM_EXTENDED_LBA; + } else { + nvm->flags &= ~NVME_NS_NVM_EXTENDED_LBA; + } +} + +void nvme_ns_nvm_configure_format(NvmeNamespaceNvm *nvm) +{ + NvmeIdNs *id_ns = &nvm->id_ns; + BlockDriverInfo bdi; + int npdg, nlbas, ret; + uint32_t discard_granularity = MAX(nvm->lbasz, 4096); + + nvm->lbaf = id_ns->lbaf[NVME_ID_NS_FLBAS_INDEX(id_ns->flbas)]; + nvm->lbasz = 1 << nvm->lbaf.ds; + + if (nvm->lbaf.ms && nvm->flags & NVME_NS_NVM_EXTENDED_LBA) { + id_ns->flbas |= NVME_ID_NS_FLBAS_EXTENDED; + } + + nlbas = nvm->size / (nvm->lbasz + nvm->lbaf.ms); + + id_ns->nsze = cpu_to_le64(nlbas); + + /* no thin provisioning */ + id_ns->ncap = id_ns->nsze; + id_ns->nuse = id_ns->ncap; + + nvm->moff = nlbas * nvm->lbasz; + + npdg = discard_granularity / nvm->lbasz; + + ret = bdrv_get_info(blk_bs(nvm->blk), &bdi); + if (ret >= 0 && bdi.cluster_size > discard_granularity) { + npdg = bdi.cluster_size / nvm->lbasz; + } + + id_ns->npda = id_ns->npdg = npdg - 1; +} + +void nvme_ns_nvm_configure_identify(NvmeNamespace *ns) +{ + NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns); + NvmeIdNs *id_ns = &nvm->id_ns; + + static const NvmeLBAF default_lba_formats[16] = { + [0] = { .ds = 9 }, + [1] = { .ds = 9, .ms = 8 }, + [2] = { .ds = 9, .ms = 16 }, + [3] = { .ds = 9, .ms = 64 }, + [4] = { .ds = 12 }, + [5] = { .ds = 12, .ms = 8 }, + [6] = { .ds = 12, .ms = 16 }, + [7] = { .ds = 12, .ms = 64 }, + }; + + id_ns->dlfeat = 0x1; + + /* support DULBE and I/O optimization fields */ + id_ns->nsfeat = 0x4 | 0x10; + + if (ns->flags & NVME_NS_SHARED) { + id_ns->nmic |= NVME_NMIC_NS_SHARED; + } + + /* eui64 is always stored in big-endian form */ + 
id_ns->eui64 = ns->eui64.v; + id_ns->nguid.eui = id_ns->eui64; + + id_ns->mc = NVME_ID_NS_MC_EXTENDED | NVME_ID_NS_MC_SEPARATE; + + id_ns->dpc = 0x1f; + + memcpy(&id_ns->lbaf, &default_lba_formats, sizeof(id_ns->lbaf)); + id_ns->nlbaf = 7; + + for (int i = 0; i <= id_ns->nlbaf; i++) { + NvmeLBAF *lbaf = &id_ns->lbaf[i]; + + if (lbaf->ds == nvm->lbaf.ds && lbaf->ms == nvm->lbaf.ms) { + id_ns->flbas |= i; + return; + } + } + + /* add non-standard lba format */ + id_ns->nlbaf++; + id_ns->lbaf[id_ns->nlbaf].ds = nvm->lbaf.ds; + id_ns->lbaf[id_ns->nlbaf].ms = nvm->lbaf.ms; + id_ns->flbas |= id_ns->nlbaf; +} + +int nvme_ns_nvm_configure(NvmeNamespace *ns, Error **errp) +{ + NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns); + BlockBackend *blk; + int ret; + + blk = blk_by_name(nvm->blk_nodename); + if (!blk) { + BlockDriverState *bs = bdrv_lookup_bs(NULL, nvm->blk_nodename, NULL); + if (bs) { + blk = blk_new(qemu_get_aio_context(), 0, BLK_PERM_ALL); + + ret = blk_insert_bs(blk, bs, errp); + if (ret < 0) { + blk_unref(blk); + return -1; + } + } + } + + if (!blk) { + error_setg(errp, "invalid blockdev '%s'", nvm->blk_nodename); + return -1; + } + + blk_ref(blk); + blk_iostatus_reset(blk); + + nvm->blk = blk; + + nvm->size = blk_getlength(nvm->blk); + if (nvm->size < 0) { + error_setg_errno(errp, -(nvm->size), "could not get blockdev size"); + return -1; + } + + ns->csi = NVME_CSI_NVM; + + nvme_ns_nvm_configure_identify(ns); + nvme_ns_nvm_configure_format(nvm); + + return 0; +} + +static void nvme_ns_nvm_instance_init(Object *obj) +{ + NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(obj); + + nvm->lbasz = 4096; + nvm->lbaf.ds = 31 - clz32(nvm->lbasz); +} + +int nvme_ns_nvm_check_params(NvmeNamespace *ns, Error **errp) +{ + NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns); + int pi_type = nvm->id_ns.dps & NVME_ID_NS_DPS_TYPE_MASK; + + if (pi_type && nvm->lbaf.ms < 8) { + error_setg(errp, "at least 8 bytes of metadata required to enable " + "protection information"); + return -1; + } + 
+ return 0; +} + +static void nvme_ns_nvm_class_init(ObjectClass *oc, void *data) +{ + NvmeNamespaceClass *nc = NVME_NAMESPACE_CLASS(oc); + + object_class_property_add_str(oc, "blockdev", get_blockdev, set_blockdev); + object_class_property_set_description(oc, "blockdev", + "node name or identifier of a " + "block device to use as a backend"); + + object_class_property_add(oc, "lba-size", "size", + get_lba_size, set_lba_size, + NULL, NULL); + object_class_property_set_description(oc, "lba-size", + "logical block size"); + + object_class_property_add(oc, "metadata-size", "uint16", + get_metadata_size, set_metadata_size, + NULL, NULL); + object_class_property_set_description(oc, "metadata-size", + "metadata size (default: 0)"); + + object_class_property_add_bool(oc, "extended-lba", + get_extended_lba, set_extended_lba); + object_class_property_set_description(oc, "extended-lba", + "use extended logical blocks " + "(default: off)"); + + object_class_property_add_enum(oc, "pi-type", "NvmeProtInfoType", + &NvmeProtInfoType_lookup, + get_pi, set_pi); + object_class_property_set_description(oc, "pi-type", + "protection information type " + "(default: none)"); + + object_class_property_add_bool(oc, "pi-first", get_pil, set_pil); + object_class_property_set_description(oc, "pi-first", + "transfer protection information " + "as the first eight bytes of " + "metadata (default: off)"); + + nc->check_params = nvme_ns_nvm_check_params; + nc->configure = nvme_ns_nvm_configure; +} + +static const TypeInfo nvme_ns_nvm_info = { + .name = TYPE_NVME_NAMESPACE_NVM, + .parent = TYPE_NVME_NAMESPACE, + .class_init = nvme_ns_nvm_class_init, + .instance_size = sizeof(NvmeNamespaceNvm), + .instance_init = nvme_ns_nvm_instance_init, +}; + +static void register_types(void) +{ + type_register_static(&nvme_ns_nvm_info); +} + +type_init(register_types); diff --git a/hw/nvme/ns-zoned.c b/hw/nvme/ns-zoned.c new file mode 100644 index 000000000000..19e45f40f0e9 --- /dev/null +++ b/hw/nvme/ns-zoned.c 
@@ -0,0 +1,449 @@ +/* + * QEMU NVM Express Virtual Zoned Namespace + * + * Copyright (C) 2020 Western Digital Corporation or its affiliates. + * Copyright (c) 2021 Samsung Electronics + * + * Authors: + * Dmitry Fomichev + * Klaus Jensen + * + * This work is licensed under the terms of the GNU GPL, version 2. See the + * COPYING file in the top-level directory. + * + */ + +#include "qemu/osdep.h" +#include "qemu/units.h" +#include "qemu/error-report.h" +#include "qapi/error.h" +#include "qom/object_interfaces.h" +#include "sysemu/sysemu.h" +#include "sysemu/block-backend.h" + +#include "nvme.h" +#include "zoned.h" + +#include "trace.h" + +static void get_zone_size(Object *obj, Visitor *v, const char *name, + void *opaque, Error **errp) +{ + NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(obj); + NvmeNamespaceZoned *zoned = NVME_NAMESPACE_ZONED(obj); + uint64_t value = zoned->zone_size << nvm->lbaf.ds; + + visit_type_size(v, name, &value, errp); +} + +static void set_zone_size(Object *obj, Visitor *v, const char *name, + void *opaque, Error **errp) +{ + NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(obj); + NvmeNamespaceZoned *zoned = NVME_NAMESPACE_ZONED(obj); + uint64_t value; + + if (!nvme_ns_prop_writable(obj, name, errp)) { + return; + } + + if (!visit_type_size(v, name, &value, errp)) { + return; + } + + zoned->zone_size = value >> nvm->lbaf.ds; +} + +static void get_zone_capacity(Object *obj, Visitor *v, const char *name, + void *opaque, Error **errp) +{ + NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(obj); + NvmeNamespaceZoned *zoned = NVME_NAMESPACE_ZONED(obj); + uint64_t value = zoned->zone_capacity << nvm->lbaf.ds; + + visit_type_size(v, name, &value, errp); +} + +static void set_zone_capacity(Object *obj, Visitor *v, const char *name, + void *opaque, Error **errp) +{ + NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(obj); + NvmeNamespaceZoned *zoned = NVME_NAMESPACE_ZONED(obj); + uint64_t value; + + if (!nvme_ns_prop_writable(obj, name, errp)) { + return; + } + + if 
(!visit_type_size(v, name, &value, errp)) { + return; + } + + zoned->zone_capacity = value >> nvm->lbaf.ds; +} + +static void get_zone_max_active(Object *obj, Visitor *v, const char *name, + void *opaque, Error **errp) +{ + NvmeNamespaceZoned *zoned = NVME_NAMESPACE_ZONED(obj); + + visit_type_uint32(v, name, &zoned->max_active_zones, errp); +} + +static void set_zone_max_active(Object *obj, Visitor *v, const char *name, + void *opaque, Error **errp) +{ + NvmeNamespaceZoned *zoned = NVME_NAMESPACE_ZONED(obj); + + if (!nvme_ns_prop_writable(obj, name, errp)) { + return; + } + + if (!visit_type_uint32(v, name, &zoned->max_active_zones, errp)) { + return; + } +} + +static void get_zone_max_open(Object *obj, Visitor *v, const char *name, + void *opaque, Error **errp) +{ + NvmeNamespaceZoned *zoned = NVME_NAMESPACE_ZONED(obj); + + visit_type_uint32(v, name, &zoned->max_open_zones, errp); +} + +static void set_zone_max_open(Object *obj, Visitor *v, const char *name, + void *opaque, Error **errp) +{ + NvmeNamespaceZoned *zoned = NVME_NAMESPACE_ZONED(obj); + + if (!nvme_ns_prop_writable(obj, name, errp)) { + return; + } + + if (!visit_type_uint32(v, name, &zoned->max_open_zones, errp)) { + return; + } +} + +static bool get_zone_cross_read(Object *obj, Error **errp) +{ + NvmeNamespaceZoned *zoned = NVME_NAMESPACE_ZONED(obj); + return zoned->flags & NVME_NS_ZONED_CROSS_READ; +} + +static void set_zone_cross_read(Object *obj, bool cross_read, Error **errp) +{ + NvmeNamespaceZoned *zoned = NVME_NAMESPACE_ZONED(obj); + + if (!nvme_ns_prop_writable(obj, "zone-cross-read", errp)) { + return; + } + + if (cross_read) { + zoned->flags |= NVME_NS_ZONED_CROSS_READ; + } else { + zoned->flags &= ~NVME_NS_ZONED_CROSS_READ; + } +} + +static void get_zone_descriptor_extension_size(Object *obj, Visitor *v, + const char *name, void *opaque, + Error **errp) +{ + NvmeNamespaceZoned *zoned = NVME_NAMESPACE_ZONED(obj); + uint64_t value = zoned->zd_extension_size; + + visit_type_size(v, name, 
&value, errp); +} + +static void set_zone_descriptor_extension_size(Object *obj, Visitor *v, + const char *name, void *opaque, + Error **errp) +{ + NvmeNamespaceZoned *zoned = NVME_NAMESPACE_ZONED(obj); + uint64_t value; + + if (!nvme_ns_prop_writable(obj, name, errp)) { + return; + } + + if (!visit_type_size(v, name, &value, errp)) { + return; + } + + if (value & 0x3f) { + error_setg(errp, "zone descriptor extension size must be a " + "multiple of 64 bytes"); + return; + } + if ((value >> 6) > 0xff) { + error_setg(errp, + "zone descriptor extension size is too large"); + return; + } + + zoned->zd_extension_size = value; +} + +void nvme_ns_zoned_init_state(NvmeNamespaceZoned *zoned) +{ + uint64_t start = 0, zone_size = zoned->zone_size; + uint64_t capacity = zoned->num_zones * zone_size; + NvmeZone *zone; + int i; + + zoned->zone_array = g_new0(NvmeZone, zoned->num_zones); + if (zoned->zd_extension_size) { + zoned->zd_extensions = g_malloc0(zoned->zd_extension_size * + zoned->num_zones); + } + + QTAILQ_INIT(&zoned->exp_open_zones); + QTAILQ_INIT(&zoned->imp_open_zones); + QTAILQ_INIT(&zoned->closed_zones); + QTAILQ_INIT(&zoned->full_zones); + + zone = zoned->zone_array; + for (i = 0; i < zoned->num_zones; i++, zone++) { + if (start + zone_size > capacity) { + zone_size = capacity - start; + } + zone->d.zt = NVME_ZONE_TYPE_SEQ_WRITE; + nvme_zoned_set_zs(zone, NVME_ZONE_STATE_EMPTY); + zone->d.za = 0; + zone->d.zcap = zoned->zone_capacity; + zone->d.zslba = start; + zone->d.wp = start; + zone->w_ptr = start; + start += zone_size; + } + + zoned->zone_size_log2 = 0; + if (is_power_of_2(zoned->zone_size)) { + zoned->zone_size_log2 = 63 - clz64(zoned->zone_size); + } +} + +int nvme_ns_zoned_configure(NvmeNamespace *ns, Error **errp) +{ + NvmeNamespaceZoned *zoned = NVME_NAMESPACE_ZONED(ns); + NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns); + NvmeIdNsZoned *id_ns_z = &zoned->id_ns; + int i; + + if (nvme_ns_nvm_configure(ns, errp)) { + return -1; + } + + zoned->num_zones = 
le64_to_cpu(nvm->id_ns.nsze) / zoned->zone_size;
+
+    if (zoned->max_active_zones && !zoned->max_open_zones) {
+        zoned->max_open_zones = zoned->max_active_zones;
+    }
+
+    if (!zoned->num_zones) {
+        error_setg(errp,
+                   "insufficient namespace size; must be at least the size "
+                   "of one zone (%"PRIu64" logical blocks)",
+                   zoned->zone_size);
+        return -1;
+    }
+
+    nvme_ns_zoned_init_state(zoned);
+
+    /* MAR/MOR are zeroes-based, FFFFFFFFFh means no limit */
+    id_ns_z->mar = cpu_to_le32(zoned->max_active_zones - 1);
+    id_ns_z->mor = cpu_to_le32(zoned->max_open_zones - 1);
+    id_ns_z->zoc = 0;
+
+    if (zoned->flags & NVME_NS_ZONED_CROSS_READ) {
+        id_ns_z->ozcs |= NVME_ID_NS_ZONED_OZCS_CROSS_READ;
+    }
+
+    for (i = 0; i <= nvm->id_ns.nlbaf; i++) {
+        id_ns_z->lbafe[i].zsze = cpu_to_le64(zoned->zone_size);
+        id_ns_z->lbafe[i].zdes =
+            zoned->zd_extension_size >> 6; /* Units of 64B */
+    }
+
+    ns->csi = NVME_CSI_ZONED;
+    nvm->id_ns.nsze = cpu_to_le64(zoned->num_zones * zoned->zone_size);
+    nvm->id_ns.ncap = nvm->id_ns.nsze;
+    nvm->id_ns.nuse = nvm->id_ns.ncap;
+
+    /*
+     * The device uses the BDRV_BLOCK_ZERO flag to determine the "deallocated"
+     * status of logical blocks. Since the spec defines that logical blocks
+     * SHALL be deallocated when the zone is in the Empty or Offline states,
+     * we can only support DULBE if the zone size is a multiple of the
+     * calculated NPDG.
+ */ + if (zoned->zone_size % (nvm->id_ns.npdg + 1)) { + warn_report("the zone size (%"PRIu64" blocks) is not a multiple of " + "the calculated deallocation granularity (%d blocks); " + "DULBE support disabled", + zoned->zone_size, nvm->id_ns.npdg + 1); + + nvm->id_ns.nsfeat &= ~0x4; + } + + return 0; +} + +void nvme_ns_zoned_clear_zone(NvmeNamespaceZoned *zoned, NvmeZone *zone) +{ + uint8_t state; + + zone->w_ptr = zone->d.wp; + state = nvme_zoned_zs(zone); + if (zone->d.wp != zone->d.zslba || + (zone->d.za & NVME_ZA_ZD_EXT_VALID)) { + if (state != NVME_ZONE_STATE_CLOSED) { + trace_pci_nvme_clear_ns_close(state, zone->d.zslba); + nvme_zoned_set_zs(zone, NVME_ZONE_STATE_CLOSED); + } + nvme_ns_zoned_aor_inc_active(zoned); + QTAILQ_INSERT_HEAD(&zoned->closed_zones, zone, entry); + } else { + trace_pci_nvme_clear_ns_reset(state, zone->d.zslba); + nvme_zoned_set_zs(zone, NVME_ZONE_STATE_EMPTY); + } +} + +/* + * Close all the zones that are currently open. + */ +void nvme_ns_zoned_shutdown(NvmeNamespace *ns) +{ + NvmeNamespaceZoned *zoned = NVME_NAMESPACE_ZONED(ns); + NvmeZone *zone, *next; + + QTAILQ_FOREACH_SAFE(zone, &zoned->closed_zones, entry, next) { + QTAILQ_REMOVE(&zoned->closed_zones, zone, entry); + nvme_ns_zoned_aor_dec_active(zoned); + nvme_ns_zoned_clear_zone(zoned, zone); + } + QTAILQ_FOREACH_SAFE(zone, &zoned->imp_open_zones, entry, next) { + QTAILQ_REMOVE(&zoned->imp_open_zones, zone, entry); + nvme_ns_zoned_aor_dec_open(zoned); + nvme_ns_zoned_aor_dec_active(zoned); + nvme_ns_zoned_clear_zone(zoned, zone); + } + QTAILQ_FOREACH_SAFE(zone, &zoned->exp_open_zones, entry, next) { + QTAILQ_REMOVE(&zoned->exp_open_zones, zone, entry); + nvme_ns_zoned_aor_dec_open(zoned); + nvme_ns_zoned_aor_dec_active(zoned); + nvme_ns_zoned_clear_zone(zoned, zone); + } + + assert(zoned->nr_open_zones == 0); +} + +static void nvme_ns_zoned_instance_init(Object *obj) +{ + NvmeNamespaceZoned *zoned = NVME_NAMESPACE_ZONED(obj); + + /* assuming 4k default lba-size, set default 
zone size/capacity to 1M */
+    zoned->zone_size = 4096;
+    zoned->zone_capacity = zoned->zone_size;
+}
+
+static int nvme_ns_zoned_check_params(NvmeNamespace *ns, Error **errp)
+{
+    NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns);
+    NvmeNamespaceZoned *zoned = NVME_NAMESPACE_ZONED(ns);
+
+    if (nvme_ns_nvm_check_params(ns, errp)) {
+        return -1;
+    }
+
+    if (!zoned->zone_size) {
+        error_setg(errp, "'zone-size' must be at least %zu bytes",
+                   nvm->lbasz);
+        return -1;
+    }
+
+    if (!zoned->zone_capacity) {
+        error_setg(errp, "'zone-capacity' must be at least %zu bytes",
+                   nvm->lbasz);
+        return -1;
+    }
+
+    if (zoned->zone_capacity > zoned->zone_size) {
+        error_setg(errp, "'zone-capacity' must not exceed 'zone-size'");
+        return -1;
+    }
+
+    if (zoned->max_active_zones) {
+        if (zoned->max_open_zones > zoned->max_active_zones) {
+            error_setg(errp,
+                       "'zone-max-open' must not exceed 'zone-max-active'");
+            return -1;
+        }
+
+        if (!zoned->max_open_zones) {
+            zoned->max_open_zones = zoned->max_active_zones;
+        }
+    }
+
+    return 0;
+}
+
+static void nvme_ns_zoned_class_init(ObjectClass *oc, void *data)
+{
+    NvmeNamespaceClass *nc = NVME_NAMESPACE_CLASS(oc);
+
+    object_class_property_add(oc, "zone-size", "size",
+                              get_zone_size, set_zone_size,
+                              NULL, NULL);
+    object_class_property_set_description(oc, "zone-size", "zone size");
+
+    object_class_property_add(oc, "zone-capacity", "size",
+                              get_zone_capacity, set_zone_capacity,
+                              NULL, NULL);
+    object_class_property_set_description(oc, "zone-capacity",
+                                          "zone capacity");
+
+    object_class_property_add_bool(oc, "zone-cross-read",
+                                   get_zone_cross_read, set_zone_cross_read);
+    object_class_property_set_description(oc, "zone-cross-read",
+                                          "allow reads to cross zone "
+                                          "boundaries");
+
+    object_class_property_add(oc, "zone-descriptor-extension-size", "size",
+                              get_zone_descriptor_extension_size,
+                              set_zone_descriptor_extension_size,
+                              NULL, NULL);
+    object_class_property_set_description(oc,
"zone-descriptor-extension-size", + "zone descriptor extension size"); + + object_class_property_add(oc, "zone-max-active", "uint32", + get_zone_max_active, set_zone_max_active, + NULL, NULL); + object_class_property_set_description(oc, "zone-max-active", + "maximum number of active zones"); + + object_class_property_add(oc, "zone-max-open", "uint32", + get_zone_max_open, set_zone_max_open, + NULL, NULL); + object_class_property_set_description(oc, "zone-max-open", + "maximum number of open zones"); + + nc->check_params = nvme_ns_zoned_check_params; + nc->configure = nvme_ns_zoned_configure; + nc->shutdown = nvme_ns_zoned_shutdown; +} + +static const TypeInfo nvme_ns_zoned_info = { + .name = TYPE_NVME_NAMESPACE_ZONED, + .parent = TYPE_NVME_NAMESPACE_NVM, + .class_init = nvme_ns_zoned_class_init, + .instance_size = sizeof(NvmeNamespaceZoned), + .instance_init = nvme_ns_zoned_instance_init, +}; + +static void register_types(void) +{ + type_register_static(&nvme_ns_zoned_info); +} + +type_init(register_types); diff --git a/hw/nvme/ns.c b/hw/nvme/ns.c index 05828fbb48a5..cd46f471dad6 100644 --- a/hw/nvme/ns.c +++ b/hw/nvme/ns.c @@ -30,107 +30,10 @@ #define MIN_DISCARD_GRANULARITY (4 * KiB) -void nvme_ns_nvm_init_format(NvmeNamespaceNvm *nvm) -{ - NvmeIdNs *id_ns = &nvm->id_ns; - BlockDriverInfo bdi; - int npdg, nlbas, ret; - - nvm->lbaf = id_ns->lbaf[NVME_ID_NS_FLBAS_INDEX(id_ns->flbas)]; - nvm->lbasz = 1 << nvm->lbaf.ds; - - nlbas = nvm->size / (nvm->lbasz + nvm->lbaf.ms); - - id_ns->nsze = cpu_to_le64(nlbas); - - /* no thin provisioning */ - id_ns->ncap = id_ns->nsze; - id_ns->nuse = id_ns->ncap; - - nvm->moff = (int64_t)nlbas << nvm->lbaf.ds; - - npdg = nvm->discard_granularity / nvm->lbasz; - - ret = bdrv_get_info(blk_bs(nvm->blk), &bdi); - if (ret >= 0 && bdi.cluster_size > nvm->discard_granularity) { - npdg = bdi.cluster_size / nvm->lbasz; - } - - id_ns->npda = id_ns->npdg = npdg - 1; -} - -void nvme_ns_init(NvmeNamespace *ns) -{ - NvmeNamespaceNvm *nvm = 
NVME_NAMESPACE_NVM(ns); - NvmeIdNs *id_ns = &nvm->id_ns; - uint8_t ds; - uint16_t ms; - int i; - - id_ns->dlfeat = 0x1; - - /* support DULBE and I/O optimization fields */ - id_ns->nsfeat |= (0x4 | 0x10); - - if (ns->flags & NVME_NS_SHARED) { - id_ns->nmic |= NVME_NMIC_NS_SHARED; - } - - /* simple copy */ - id_ns->mssrl = cpu_to_le16(nvm->mssrl); - id_ns->mcl = cpu_to_le32(nvm->mcl); - id_ns->msrc = nvm->msrc; - id_ns->eui64 = cpu_to_be64(ns->eui64.v); - - ds = 31 - clz32(nvm->lbasz); - ms = nvm->lbaf.ms; - - id_ns->mc = NVME_ID_NS_MC_EXTENDED | NVME_ID_NS_MC_SEPARATE; - - if (ms && nvm->flags & NVME_NS_NVM_EXTENDED_LBA) { - id_ns->flbas |= NVME_ID_NS_FLBAS_EXTENDED; - } - - id_ns->dpc = 0x1f; - - static const NvmeLBAF lbaf[16] = { - [0] = { .ds = 9 }, - [1] = { .ds = 9, .ms = 8 }, - [2] = { .ds = 9, .ms = 16 }, - [3] = { .ds = 9, .ms = 64 }, - [4] = { .ds = 12 }, - [5] = { .ds = 12, .ms = 8 }, - [6] = { .ds = 12, .ms = 16 }, - [7] = { .ds = 12, .ms = 64 }, - }; - - memcpy(&id_ns->lbaf, &lbaf, sizeof(lbaf)); - id_ns->nlbaf = 7; - - for (i = 0; i <= id_ns->nlbaf; i++) { - NvmeLBAF *lbaf = &id_ns->lbaf[i]; - if (lbaf->ds == ds) { - if (lbaf->ms == ms) { - id_ns->flbas |= i; - goto lbaf_found; - } - } - } - - /* add non-standard lba format */ - id_ns->nlbaf++; - id_ns->lbaf[id_ns->nlbaf].ds = ds; - id_ns->lbaf[id_ns->nlbaf].ms = ms; - id_ns->flbas |= id_ns->nlbaf; - -lbaf_found: - nvme_ns_nvm_init_format(nvm); -} - static int nvme_nsdev_init_blk(NvmeNamespaceDevice *nsdev, Error **errp) { - NvmeNamespace *ns = &nsdev->ns; + NvmeNamespace *ns = NVME_NAMESPACE(nsdev->ns); NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns); BlockConf *blkconf = &nsdev->blkconf; bool read_only; @@ -167,7 +70,7 @@ static int nvme_nsdev_init_blk(NvmeNamespaceDevice *nsdev, static int nvme_nsdev_zoned_check_calc_geometry(NvmeNamespaceDevice *nsdev, Error **errp) { - NvmeNamespace *ns = &nsdev->ns; + NvmeNamespace *ns = NVME_NAMESPACE(nsdev->ns); NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns); 
NvmeNamespaceZoned *zoned = NVME_NAMESPACE_ZONED(ns); @@ -206,152 +109,10 @@ static int nvme_nsdev_zoned_check_calc_geometry(NvmeNamespaceDevice *nsdev, */ zoned->zone_size = zone_size / nvm->lbasz; zoned->zone_capacity = zone_cap / nvm->lbasz; - zoned->num_zones = le64_to_cpu(nvm->id_ns.nsze) / zoned->zone_size; - - /* Do a few more sanity checks of ZNS properties */ - if (!zoned->num_zones) { - error_setg(errp, - "insufficient drive capacity, must be at least the size " - "of one zone (%"PRIu64"B)", zone_size); - return -1; - } return 0; } -static void nvme_ns_zoned_init_state(NvmeNamespaceZoned *zoned) -{ - uint64_t start = 0, zone_size = zoned->zone_size; - uint64_t capacity = zoned->num_zones * zone_size; - NvmeZone *zone; - int i; - - zoned->zone_array = g_new0(NvmeZone, zoned->num_zones); - if (zoned->zd_extension_size) { - zoned->zd_extensions = g_malloc0(zoned->zd_extension_size * - zoned->num_zones); - } - - QTAILQ_INIT(&zoned->exp_open_zones); - QTAILQ_INIT(&zoned->imp_open_zones); - QTAILQ_INIT(&zoned->closed_zones); - QTAILQ_INIT(&zoned->full_zones); - - zone = zoned->zone_array; - for (i = 0; i < zoned->num_zones; i++, zone++) { - if (start + zone_size > capacity) { - zone_size = capacity - start; - } - zone->d.zt = NVME_ZONE_TYPE_SEQ_WRITE; - nvme_zoned_set_zs(zone, NVME_ZONE_STATE_EMPTY); - zone->d.za = 0; - zone->d.zcap = zoned->zone_capacity; - zone->d.zslba = start; - zone->d.wp = start; - zone->w_ptr = start; - start += zone_size; - } - - zoned->zone_size_log2 = 0; - if (is_power_of_2(zoned->zone_size)) { - zoned->zone_size_log2 = 63 - clz64(zoned->zone_size); - } -} - -static void nvme_ns_zoned_init(NvmeNamespace *ns) -{ - NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns); - NvmeNamespaceZoned *zoned = NVME_NAMESPACE_ZONED(ns); - NvmeIdNsZoned *id_ns_z = &zoned->id_ns; - int i; - - nvme_ns_zoned_init_state(zoned); - - /* MAR/MOR are zeroes-based, FFFFFFFFFh means no limit */ - id_ns_z->mar = cpu_to_le32(zoned->max_active_zones - 1); - id_ns_z->mor 
= cpu_to_le32(zoned->max_open_zones - 1); - id_ns_z->zoc = 0; - - if (zoned->flags & NVME_NS_ZONED_CROSS_READ) { - id_ns_z->ozcs |= NVME_ID_NS_ZONED_OZCS_CROSS_READ; - } - - for (i = 0; i <= nvm->id_ns.nlbaf; i++) { - id_ns_z->lbafe[i].zsze = cpu_to_le64(zoned->zone_size); - id_ns_z->lbafe[i].zdes = - zoned->zd_extension_size >> 6; /* Units of 64B */ - } - - ns->csi = NVME_CSI_ZONED; - nvm->id_ns.nsze = cpu_to_le64(zoned->num_zones * zoned->zone_size); - nvm->id_ns.ncap = nvm->id_ns.nsze; - nvm->id_ns.nuse = nvm->id_ns.ncap; - - /* - * The device uses the BDRV_BLOCK_ZERO flag to determine the "deallocated" - * status of logical blocks. Since the spec defines that logical blocks - * SHALL be deallocated when then zone is in the Empty or Offline states, - * we can only support DULBE if the zone size is a multiple of the - * calculated NPDG. - */ - if (zoned->zone_size % (nvm->id_ns.npdg + 1)) { - warn_report("the zone size (%"PRIu64" blocks) is not a multiple of " - "the calculated deallocation granularity (%d blocks); " - "DULBE support disabled", - zoned->zone_size, nvm->id_ns.npdg + 1); - - nvm->id_ns.nsfeat &= ~0x4; - } -} - -static void nvme_ns_zoned_clear_zone(NvmeNamespaceZoned *zoned, NvmeZone *zone) -{ - uint8_t state; - - zone->w_ptr = zone->d.wp; - state = nvme_zoned_zs(zone); - if (zone->d.wp != zone->d.zslba || - (zone->d.za & NVME_ZA_ZD_EXT_VALID)) { - if (state != NVME_ZONE_STATE_CLOSED) { - trace_pci_nvme_clear_ns_close(state, zone->d.zslba); - nvme_zoned_set_zs(zone, NVME_ZONE_STATE_CLOSED); - } - nvme_ns_zoned_aor_inc_active(zoned); - QTAILQ_INSERT_HEAD(&zoned->closed_zones, zone, entry); - } else { - trace_pci_nvme_clear_ns_reset(state, zone->d.zslba); - nvme_zoned_set_zs(zone, NVME_ZONE_STATE_EMPTY); - } -} - -/* - * Close all the zones that are currently open. 
- */ -static void nvme_ns_zoned_shutdown(NvmeNamespaceZoned *zoned) -{ - NvmeZone *zone, *next; - - QTAILQ_FOREACH_SAFE(zone, &zoned->closed_zones, entry, next) { - QTAILQ_REMOVE(&zoned->closed_zones, zone, entry); - nvme_ns_zoned_aor_dec_active(zoned); - nvme_ns_zoned_clear_zone(zoned, zone); - } - QTAILQ_FOREACH_SAFE(zone, &zoned->imp_open_zones, entry, next) { - QTAILQ_REMOVE(&zoned->imp_open_zones, zone, entry); - nvme_ns_zoned_aor_dec_open(zoned); - nvme_ns_zoned_aor_dec_active(zoned); - nvme_ns_zoned_clear_zone(zoned, zone); - } - QTAILQ_FOREACH_SAFE(zone, &zoned->exp_open_zones, entry, next) { - QTAILQ_REMOVE(&zoned->exp_open_zones, zone, entry); - nvme_ns_zoned_aor_dec_open(zoned); - nvme_ns_zoned_aor_dec_active(zoned); - nvme_ns_zoned_clear_zone(zoned, zone); - } - - assert(zoned->nr_open_zones == 0); -} - static int nvme_nsdev_check_constraints(NvmeNamespaceDevice *nsdev, Error **errp) { @@ -405,7 +166,8 @@ static int nvme_nsdev_check_constraints(NvmeNamespaceDevice *nsdev, static int nvme_nsdev_setup(NvmeNamespaceDevice *nsdev, Error **errp) { - NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(&nsdev->ns); + NvmeNamespace *ns = NVME_NAMESPACE(nsdev->ns); + NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns); static uint64_t ns_count; if (nvme_nsdev_check_constraints(nsdev, errp)) { @@ -413,20 +175,20 @@ static int nvme_nsdev_setup(NvmeNamespaceDevice *nsdev, Error **errp) } if (nsdev->params.shared) { - nsdev->ns.flags |= NVME_NS_SHARED; + ns->flags |= NVME_NS_SHARED; } - nsdev->ns.nsid = nsdev->params.nsid; - memcpy(&nsdev->ns.uuid, &nsdev->params.uuid, sizeof(nsdev->ns.uuid)); + ns->nsid = nsdev->params.nsid; + memcpy(&ns->uuid, &nsdev->params.uuid, sizeof(ns->uuid)); if (nsdev->params.eui64) { - stq_be_p(&nsdev->ns.eui64.v, nsdev->params.eui64); + stq_be_p(&ns->eui64.v, nsdev->params.eui64); } /* Substitute a missing EUI-64 by an autogenerated one */ ++ns_count; - if (!nsdev->ns.eui64.v && nsdev->params.eui64_default) { - nsdev->ns.eui64.v = ns_count + 
         NVME_EUI64_DEFAULT;
+    if (!ns->eui64.v && nsdev->params.eui64_default) {
+        ns->eui64.v = ns_count + NVME_EUI64_DEFAULT;
     }
 
     nvm->id_ns.dps = nsdev->params.pi;
@@ -434,12 +196,13 @@ static int nvme_nsdev_setup(NvmeNamespaceDevice *nsdev, Error **errp)
         nvm->id_ns.dps |= NVME_ID_NS_DPS_FIRST_EIGHT;
     }
 
-    nsdev->ns.csi = NVME_CSI_NVM;
+    ns->csi = NVME_CSI_NVM;
 
-    nvme_ns_init(&nsdev->ns);
+    nvme_ns_nvm_configure_identify(ns);
+    nvme_ns_nvm_configure_format(nvm);
 
     if (nsdev->params.zoned) {
-        NvmeNamespaceZoned *zoned = NVME_NAMESPACE_ZONED(&nsdev->ns);
+        NvmeNamespaceZoned *zoned = NVME_NAMESPACE_ZONED(ns);
 
         if (nvme_nsdev_zoned_check_calc_geometry(nsdev, errp) != 0) {
             return -1;
@@ -453,7 +216,9 @@ static int nvme_nsdev_setup(NvmeNamespaceDevice *nsdev, Error **errp)
             zoned->flags |= NVME_NS_ZONED_CROSS_READ;
         }
 
-        nvme_ns_zoned_init(&nsdev->ns);
+        if (nvme_ns_zoned_configure(ns, errp)) {
+            return -1;
+        }
     }
 
     return 0;
@@ -468,7 +233,7 @@ void nvme_ns_shutdown(NvmeNamespace *ns)
 {
     blk_flush(nvme_blk(ns));
 
     if (nvme_ns_zoned(ns)) {
-        nvme_ns_zoned_shutdown(NVME_NAMESPACE_ZONED(ns));
+        nvme_ns_zoned_shutdown(ns);
     }
 }
 
@@ -485,7 +250,7 @@ void nvme_ns_cleanup(NvmeNamespace *ns)
 static void nvme_nsdev_unrealize(DeviceState *dev)
 {
     NvmeNamespaceDevice *nsdev = NVME_NAMESPACE_DEVICE(dev);
-    NvmeNamespace *ns = &nsdev->ns;
+    NvmeNamespace *ns = NVME_NAMESPACE(nsdev->ns);
 
     nvme_ns_drain(ns);
     nvme_ns_shutdown(ns);
@@ -495,7 +260,7 @@ static void nvme_nsdev_unrealize(DeviceState *dev)
 static void nvme_nsdev_realize(DeviceState *dev, Error **errp)
 {
     NvmeNamespaceDevice *nsdev = NVME_NAMESPACE_DEVICE(dev);
-    NvmeNamespace *ns = &nsdev->ns;
+    NvmeNamespace *ns = NULL;
     BusState *s = qdev_get_parent_bus(dev);
     NvmeCtrlLegacyDevice *ctrl = NVME_DEVICE_LEGACY(s->parent);
     NvmeState *n = NVME_STATE(ctrl);
@@ -525,6 +290,12 @@ static void nvme_nsdev_realize(DeviceState *dev, Error **errp)
         }
     }
 
+    nsdev->ns = nsdev->params.zoned ? object_new(TYPE_NVME_NAMESPACE_ZONED) :
+                                      object_new(TYPE_NVME_NAMESPACE_NVM);
+
+    ns = NVME_NAMESPACE(nsdev->ns);
+    ns->realized = true;
+
     if (nvme_nsdev_init_blk(nsdev, errp)) {
         return;
     }
diff --git a/hw/nvme/nvm.h b/hw/nvme/nvm.h
new file mode 100644
index 000000000000..c3882ce5c21d
--- /dev/null
+++ b/hw/nvme/nvm.h
@@ -0,0 +1,65 @@
+#ifndef HW_NVME_NVM_H
+#define HW_NVME_NVM_H
+
+#include "nvme.h"
+
+#define TYPE_NVME_NAMESPACE_NVM "x-nvme-ns-nvm"
+OBJECT_DECLARE_SIMPLE_TYPE(NvmeNamespaceNvm, NVME_NAMESPACE_NVM)
+
+enum {
+    NVME_NS_NVM_EXTENDED_LBA = 1 << 0,
+    NVME_NS_NVM_PI_FIRST = 1 << 1,
+};
+
+typedef struct NvmeNamespaceNvm {
+    NvmeNamespace parent_obj;
+
+    NvmeIdNs id_ns;
+
+    char *blk_nodename;
+    BlockBackend *blk;
+    int64_t size;
+    int64_t moff;
+
+    NvmeLBAF lbaf;
+    size_t lbasz;
+    uint32_t discard_granularity;
+
+    uint16_t mssrl;
+    uint32_t mcl;
+    uint8_t msrc;
+
+    unsigned long flags;
+} NvmeNamespaceNvm;
+
+static inline BlockBackend *nvme_blk(NvmeNamespace *ns)
+{
+    return NVME_NAMESPACE_NVM(ns)->blk;
+}
+
+static inline size_t nvme_l2b(NvmeNamespaceNvm *nvm, uint64_t lba)
+{
+    return lba << nvm->lbaf.ds;
+}
+
+static inline size_t nvme_m2b(NvmeNamespaceNvm *nvm, uint64_t lba)
+{
+    return nvm->lbaf.ms * lba;
+}
+
+static inline int64_t nvme_moff(NvmeNamespaceNvm *nvm, uint64_t lba)
+{
+    return nvm->moff + nvme_m2b(nvm, lba);
+}
+
+static inline bool nvme_ns_ext(NvmeNamespaceNvm *nvm)
+{
+    return !!NVME_ID_NS_FLBAS_EXTENDED(nvm->id_ns.flbas);
+}
+
+int nvme_ns_nvm_check_params(NvmeNamespace *ns, Error **errp);
+int nvme_ns_nvm_configure(NvmeNamespace *ns, Error **errp);
+void nvme_ns_nvm_configure_format(NvmeNamespaceNvm *nvm);
+void nvme_ns_nvm_configure_identify(NvmeNamespace *ns);
+
+#endif /* HW_NVME_NVM_H */
diff --git a/hw/nvme/nvme.h b/hw/nvme/nvme.h
index 2a623b9eecd6..abac192f0c81 100644
--- a/hw/nvme/nvme.h
+++ b/hw/nvme/nvme.h
@@ -21,6 +21,7 @@
 #include "qemu/uuid.h"
 #include "qemu/notify.h"
 #include "qapi/qapi-builtin-visit.h"
+#include "qapi/qapi-types-qom.h"
 
 #include "hw/pci/pci.h"
 #include "hw/block/block.h"
@@ -58,6 +59,7 @@ struct NvmeNamespaceClass {
 
     int (*check_params)(NvmeNamespace *ns, Error **errp);
     int (*configure)(NvmeNamespace *ns, Error **errp);
+    void (*shutdown)(NvmeNamespace *ns);
 };
 
 #define TYPE_NVME_SUBSYSTEM "x-nvme-subsystem"
@@ -143,66 +145,6 @@ typedef struct NvmeNamespaceParams {
     uint32_t zd_extension_size;
 } NvmeNamespaceParams;
 
-typedef struct NvmeZone {
-    NvmeZoneDescr d;
-    uint64_t w_ptr;
-    QTAILQ_ENTRY(NvmeZone) entry;
-} NvmeZone;
-
-enum {
-    NVME_NS_ZONED_CROSS_READ = 1 << 0,
-};
-
-typedef struct NvmeNamespaceZoned {
-    NvmeIdNsZoned id_ns;
-
-    uint32_t num_zones;
-    NvmeZone *zone_array;
-
-    uint64_t zone_size;
-    uint32_t zone_size_log2;
-
-    uint64_t zone_capacity;
-
-    uint32_t zd_extension_size;
-    uint8_t *zd_extensions;
-
-    uint32_t max_open_zones;
-    int32_t nr_open_zones;
-    uint32_t max_active_zones;
-    int32_t nr_active_zones;
-
-    unsigned long flags;
-
-    QTAILQ_HEAD(, NvmeZone) exp_open_zones;
-    QTAILQ_HEAD(, NvmeZone) imp_open_zones;
-    QTAILQ_HEAD(, NvmeZone) closed_zones;
-    QTAILQ_HEAD(, NvmeZone) full_zones;
-} NvmeNamespaceZoned;
-
-enum {
-    NVME_NS_NVM_EXTENDED_LBA = 1 << 0,
-    NVME_NS_NVM_PI_FIRST = 1 << 1,
-};
-
-typedef struct NvmeNamespaceNvm {
-    NvmeIdNs id_ns;
-
-    BlockBackend *blk;
-    int64_t size;
-    int64_t moff;
-
-    NvmeLBAF lbaf;
-    size_t lbasz;
-    uint32_t discard_granularity;
-
-    uint16_t mssrl;
-    uint32_t mcl;
-    uint8_t msrc;
-
-    unsigned long flags;
-} NvmeNamespaceNvm;
-
 enum NvmeNamespaceFlags {
     NVME_NS_SHARED = 1 << 0,
 };
@@ -234,27 +176,16 @@ typedef struct NvmeNamespace {
     struct {
         uint32_t err_rec;
     } features;
-
-    NvmeNamespaceNvm nvm;
-    NvmeNamespaceZoned zoned;
 } NvmeNamespace;
 
 bool nvme_ns_prop_writable(Object *obj, const char *name, Error **errp);
 
-#define NVME_NAMESPACE_NVM(ns) (&(ns)->nvm)
-#define NVME_NAMESPACE_ZONED(ns) (&(ns)->zoned)
-
-static inline BlockBackend *nvme_blk(NvmeNamespace *ns)
-{
-    return NVME_NAMESPACE_NVM(ns)->blk;
-}
-
 typedef struct NvmeNamespaceDevice {
     DeviceState parent_obj;
     BlockConf blkconf;
     int32_t bootindex;
 
-    NvmeNamespace ns;
+    Object *ns;
 
     NvmeNamespaceParams params;
 } NvmeNamespaceDevice;
 
@@ -267,27 +198,6 @@ static inline uint32_t nvme_nsid(NvmeNamespace *ns)
     return 0;
 }
 
-static inline size_t nvme_l2b(NvmeNamespaceNvm *nvm, uint64_t lba)
-{
-    return lba << nvm->lbaf.ds;
-}
-
-static inline size_t nvme_m2b(NvmeNamespaceNvm *nvm, uint64_t lba)
-{
-    return nvm->lbaf.ms * lba;
-}
-
-static inline int64_t nvme_moff(NvmeNamespaceNvm *nvm, uint64_t lba)
-{
-    return nvm->moff + nvme_m2b(nvm, lba);
-}
-
-static inline bool nvme_ns_ext(NvmeNamespaceNvm *nvm)
-{
-    return !!NVME_ID_NS_FLBAS_EXTENDED(nvm->id_ns.flbas);
-}
-
-void nvme_ns_nvm_init_format(NvmeNamespaceNvm *nvm);
 void nvme_ns_init(NvmeNamespace *ns);
 void nvme_ns_drain(NvmeNamespace *ns);
 void nvme_ns_shutdown(NvmeNamespace *ns);
diff --git a/hw/nvme/zoned.h b/hw/nvme/zoned.h
index 277d70aee5c7..6e11ad3f34a2 100644
--- a/hw/nvme/zoned.h
+++ b/hw/nvme/zoned.h
@@ -4,9 +4,52 @@
 #include "qemu/units.h"
 
 #include "nvme.h"
+#include "nvm.h"
 
 #define NVME_DEFAULT_ZONE_SIZE (128 * MiB)
 
+#define TYPE_NVME_NAMESPACE_ZONED "x-nvme-ns-zoned"
+OBJECT_DECLARE_SIMPLE_TYPE(NvmeNamespaceZoned, NVME_NAMESPACE_ZONED)
+
+typedef struct NvmeZone {
+    NvmeZoneDescr d;
+    uint64_t w_ptr;
+    QTAILQ_ENTRY(NvmeZone) entry;
+} NvmeZone;
+
+enum {
+    NVME_NS_ZONED_CROSS_READ = 1 << 0,
+};
+
+typedef struct NvmeNamespaceZoned {
+    NvmeNamespaceNvm parent_obj;
+
+    NvmeIdNsZoned id_ns;
+
+    uint32_t num_zones;
+    NvmeZone *zone_array;
+
+    uint64_t zone_size;
+    uint32_t zone_size_log2;
+
+    uint64_t zone_capacity;
+
+    uint32_t zd_extension_size;
+    uint8_t *zd_extensions;
+
+    uint32_t max_open_zones;
+    int32_t nr_open_zones;
+    uint32_t max_active_zones;
+    int32_t nr_active_zones;
+
+    unsigned long flags;
+
+    QTAILQ_HEAD(, NvmeZone) exp_open_zones;
+    QTAILQ_HEAD(, NvmeZone) imp_open_zones;
+    QTAILQ_HEAD(, NvmeZone) closed_zones;
+    QTAILQ_HEAD(, NvmeZone) full_zones;
+} NvmeNamespaceZoned;
+
 static inline NvmeZoneState nvme_zoned_zs(NvmeZone *zone)
 {
     return zone->d.zs >> 4;
@@ -96,4 +139,9 @@ static inline void nvme_ns_zoned_aor_dec_active(NvmeNamespaceZoned *zoned)
     assert(zoned->nr_active_zones >= 0);
 }
 
+void nvme_ns_zoned_init_state(NvmeNamespaceZoned *zoned);
+int nvme_ns_zoned_configure(NvmeNamespace *ns, Error **errp);
+void nvme_ns_zoned_clear_zone(NvmeNamespaceZoned *zoned, NvmeZone *zone);
+void nvme_ns_zoned_shutdown(NvmeNamespace *ns);
+
 #endif /* HW_NVME_ZONED_H */
diff --git a/qapi/qom.json b/qapi/qom.json
index ec108e887344..feb3cdc98bce 100644
--- a/qapi/qom.json
+++ b/qapi/qom.json
@@ -680,6 +680,50 @@
             '*uuid': 'str',
             '*attached-ctrls': ['str'] } }
 
+##
+# @NvmeProtInfoType:
+#
+# Indicates the namespace protection information type.
+#
+# Since: 6.1
+##
+{ 'enum': 'NvmeProtInfoType',
+  'data': [ 'none', 'type1', 'type2', 'type3' ] }
+
+##
+# @NvmeNamespaceNvmProperties:
+#
+# Properties for x-nvme-ns-nvm objects.
+#
+# @pi-type: protection information type
+#
+# Since: 6.1
+##
+{ 'struct': 'NvmeNamespaceNvmProperties',
+  'base': 'NvmeNamespaceProperties',
+  'data': { 'blockdev': 'str',
+            '*lba-size': 'size',
+            '*metadata-size': 'size',
+            '*extended-lba': 'bool',
+            '*pi-type': 'NvmeProtInfoType',
+            '*pi-first': 'bool' } }
+
+##
+# @NvmeNamespaceZonedProperties:
+#
+# Properties for x-nvme-ns-zoned objects.
+#
+# Since: 6.1
+##
+{ 'struct': 'NvmeNamespaceZonedProperties',
+  'base': 'NvmeNamespaceNvmProperties',
+  'data': { '*zone-size': 'size',
+            '*zone-capacity': 'size',
+            '*zone-cross-read': 'bool',
+            '*zone-descriptor-extension-size': 'size',
+            '*zone-max-active': 'uint32',
+            '*zone-max-open': 'uint32' } }
+
 ##
 # @PrManagerHelperProperties:
 #
@@ -831,6 +875,8 @@
       'if': 'defined(CONFIG_LINUX)' },
     'memory-backend-ram',
     'x-nvme-subsystem',
+    'x-nvme-ns-nvm',
+    'x-nvme-ns-zoned',
     'pef-guest',
     'pr-manager-helper',
     'qtest',
@@ -890,6 +936,8 @@
       'if': 'defined(CONFIG_LINUX)' },
   'memory-backend-ram': 'MemoryBackendProperties',
   'x-nvme-subsystem': 'NvmeSubsystemProperties',
+  'x-nvme-ns-nvm': 'NvmeNamespaceNvmProperties',
+  'x-nvme-ns-zoned': 'NvmeNamespaceZonedProperties',
   'pr-manager-helper': 'PrManagerHelperProperties',
   'qtest': 'QtestProperties',
   'rng-builtin': 'RngProperties',

From patchwork Tue Sep 14 20:37:37 2021
X-Patchwork-Submitter: Klaus Jensen
X-Patchwork-Id: 1528139
From: Klaus Jensen
To: qemu-devel@nongnu.org
Subject: [PATCH RFC 13/13] hw/nvme: add attached-namespaces prop
Date: Tue, 14 Sep 2021 22:37:37 +0200
Message-Id: <20210914203737.182571-14-its@irrelevant.dk>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20210914203737.182571-1-its@irrelevant.dk>
References: <20210914203737.182571-1-its@irrelevant.dk>
Cc: Kevin Wolf, qemu-block@nongnu.org, Klaus Jensen, Markus Armbruster,
    Klaus Jensen, Hanna Reitz, Stefan Hajnoczi, Keith Busch,
    Philippe Mathieu-Daudé

From: Klaus Jensen

Add a runtime property to get a list of attached namespaces per
controller.

Signed-off-by: Klaus Jensen
---
 hw/nvme/ctrl.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c
index 04e564ad6be6..ed867384e40a 100644
--- a/hw/nvme/ctrl.c
+++ b/hw/nvme/ctrl.c
@@ -6746,6 +6746,27 @@ static void nvme_set_smart_warning(Object *obj, Visitor *v, const char *name,
     }
 }
 
+static void get_attached_namespaces(Object *obj, Visitor *v, const char *name,
+                                    void *opaque, Error **errp)
+{
+    NvmeState *n = NVME_STATE(obj);
+    strList *paths = NULL;
+    strList **tail = &paths;
+    int nsid;
+
+    for (nsid = 1; nsid <= NVME_MAX_NAMESPACES; nsid++) {
+        NvmeNamespace *ns = nvme_ns(n, nsid);
+        if (!ns) {
+            continue;
+        }
+
+        QAPI_LIST_APPEND(tail, object_get_canonical_path(OBJECT(ns)));
+    }
+
+    visit_type_strList(v, name, &paths, errp);
+    qapi_free_strList(paths);
+}
+
 static const VMStateDescription nvme_vmstate = {
     .name = "nvme",
     .unmigratable = 1,
@@ -6771,6 +6792,9 @@ static void nvme_state_instance_init(Object *obj)
     object_property_add(obj, "smart_critical_warning", "uint8",
                         nvme_get_smart_warning,
                         nvme_set_smart_warning, NULL, NULL);
+
+    object_property_add(obj, "attached-namespaces", "str",
+                        get_attached_namespaces, NULL, NULL, NULL);
 }
 
 static const TypeInfo nvme_state_info = {