From patchwork Mon May 16 23:21:37 2022
X-Patchwork-Submitter: David Matlack
X-Patchwork-Id: 1631989
Date: Mon, 16 May 2022 23:21:37 +0000
Subject: [PATCH v6 21/22] KVM: Allow for different capacities in kvm_mmu_memory_cache structs
From: David Matlack
To: Paolo Bonzini
Cc: Marc Zyngier, Huacai Chen, Aleksandar Markovic, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou, Sean Christopherson, Andrew Jones, Ben Gardon, Peter Xu, maciej.szmigiero@oracle.com, "moderated list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64)", "open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)", "open list:KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv)", Peter Feiner, Lai Jiangshan, David Matlack
Message-Id: <20220516232138.1783324-22-dmatlack@google.com>
In-Reply-To: <20220516232138.1783324-1-dmatlack@google.com>
References: <20220516232138.1783324-1-dmatlack@google.com>

Allow the capacity of the kvm_mmu_memory_cache struct to be chosen at
declaration time rather than being fixed for all declarations. This
will be used in a follow-up commit to declare a cache in x86 with a
capacity of 512+ objects without having to increase the capacity of
all caches in KVM.

This change requires that each cache now specify its capacity at
runtime, since the cache struct itself no longer has a fixed capacity
known at compile time. To protect against someone accidentally defining
a kvm_mmu_memory_cache struct directly (without the extra storage), this
commit includes a WARN_ON() in kvm_mmu_topup_memory_cache().

In order to support different capacities, this commit changes the
objects pointer array to be dynamically allocated the first time the
cache is topped up.

While here, opportunistically clean up the stack-allocated
kvm_mmu_memory_cache structs in riscv and arm64 to use designated
initializers.

No functional change intended.
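For a concrete picture of the new scheme, here is a minimal caller-side sketch, modeled on the arm64/riscv ioremap paths touched below. The function name example_ioremap() and the min value are made up for illustration; the topup/alloc/free helpers are the existing KVM ones as modified by this patch:

```c
#include <linux/kvm_host.h>

/*
 * Sketch only: with this patch, a zero-initialized cache has no objects
 * array; the array is allocated on the first topup with the capacity the
 * topup path passes in (KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE for the
 * kvm_mmu_topup_memory_cache() wrapper added below).
 */
static int example_ioremap(struct kvm *kvm, gpa_t gpa, phys_addr_t pa)
{
	struct kvm_mmu_memory_cache cache = { .gfp_zero = __GFP_ZERO };
	int ret;

	/* First topup kvmalloc()s cache.objects and fills it up to capacity. */
	ret = kvm_mmu_topup_memory_cache(&cache, 1 /* made-up min */);
	if (ret)
		return ret;

	/* ... consume pre-allocated objects via kvm_mmu_memory_cache_alloc() ... */

	/* Frees any unused objects, then the objects array itself. */
	kvm_mmu_free_memory_cache(&cache);
	return ret;
}
```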
Reviewed-by: Marc Zyngier
Signed-off-by: David Matlack
Reviewed-by: Anup Patel
---
 arch/arm64/kvm/mmu.c      |  2 +-
 arch/riscv/kvm/mmu.c      |  5 +----
 include/linux/kvm_types.h |  6 +++++-
 virt/kvm/kvm_main.c       | 33 ++++++++++++++++++++++++++++++---
 4 files changed, 37 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 53ae2c0640bc..f443ed845f85 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -764,7 +764,7 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
 {
 	phys_addr_t addr;
 	int ret = 0;
-	struct kvm_mmu_memory_cache cache = { 0, __GFP_ZERO, NULL, };
+	struct kvm_mmu_memory_cache cache = { .gfp_zero = __GFP_ZERO };
 	struct kvm_pgtable *pgt = kvm->arch.mmu.pgt;
 	enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_DEVICE |
 				     KVM_PGTABLE_PROT_R |
diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
index f80a34fbf102..4d95ebe4114f 100644
--- a/arch/riscv/kvm/mmu.c
+++ b/arch/riscv/kvm/mmu.c
@@ -347,10 +347,7 @@ static int stage2_ioremap(struct kvm *kvm, gpa_t gpa, phys_addr_t hpa,
 	int ret = 0;
 	unsigned long pfn;
 	phys_addr_t addr, end;
-	struct kvm_mmu_memory_cache pcache;
-
-	memset(&pcache, 0, sizeof(pcache));
-	pcache.gfp_zero = __GFP_ZERO;
+	struct kvm_mmu_memory_cache pcache = { .gfp_zero = __GFP_ZERO };

 	end = (gpa + size + PAGE_SIZE - 1) & PAGE_MASK;
 	pfn = __phys_to_pfn(hpa);
diff --git a/include/linux/kvm_types.h b/include/linux/kvm_types.h
index ac1ebb37a0ff..68529884eaf8 100644
--- a/include/linux/kvm_types.h
+++ b/include/linux/kvm_types.h
@@ -83,12 +83,16 @@ struct gfn_to_pfn_cache {
  * MMU flows is problematic, as is triggering reclaim, I/O, etc... while
  * holding MMU locks.  Note, these caches act more like prefetch buffers than
  * classical caches, i.e. objects are not returned to the cache on being freed.
+ *
+ * The @capacity field and @objects array are lazily initialized when the cache
+ * is topped up (__kvm_mmu_topup_memory_cache()).
  */
 struct kvm_mmu_memory_cache {
 	int nobjs;
 	gfp_t gfp_zero;
 	struct kmem_cache *kmem_cache;
-	void *objects[KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE];
+	int capacity;
+	void **objects;
 };

 #endif
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index e089db822c12..5e2e75014256 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -369,14 +369,31 @@ static inline void *mmu_memory_cache_alloc_obj(struct kvm_mmu_memory_cache *mc,
 	return (void *)__get_free_page(gfp_flags);
 }

-int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min)
+static int __kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int capacity, int min)
 {
+	gfp_t gfp = GFP_KERNEL_ACCOUNT;
 	void *obj;

 	if (mc->nobjs >= min)
 		return 0;
-	while (mc->nobjs < ARRAY_SIZE(mc->objects)) {
-		obj = mmu_memory_cache_alloc_obj(mc, GFP_KERNEL_ACCOUNT);
+
+	if (unlikely(!mc->objects)) {
+		if (WARN_ON_ONCE(!capacity))
+			return -EIO;
+
+		mc->objects = kvmalloc_array(sizeof(void *), capacity, gfp);
+		if (!mc->objects)
+			return -ENOMEM;
+
+		mc->capacity = capacity;
+	}
+
+	/* It is illegal to request a different capacity across topups. */
+	if (WARN_ON_ONCE(mc->capacity != capacity))
+		return -EIO;
+
+	while (mc->nobjs < mc->capacity) {
+		obj = mmu_memory_cache_alloc_obj(mc, gfp);
 		if (!obj)
 			return mc->nobjs >= min ? 0 : -ENOMEM;
 		mc->objects[mc->nobjs++] = obj;
@@ -384,6 +401,11 @@ int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min)
 	return 0;
 }

+int kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int min)
+{
+	return __kvm_mmu_topup_memory_cache(mc, KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE, min);
+}
+
 int kvm_mmu_memory_cache_nr_free_objects(struct kvm_mmu_memory_cache *mc)
 {
 	return mc->nobjs;
@@ -397,6 +419,11 @@ void kvm_mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc)
 		else
 			free_page((unsigned long)mc->objects[--mc->nobjs]);
 	}
+
+	kvfree(mc->objects);
+
+	mc->objects = NULL;
+	mc->capacity = 0;
 }

 void *kvm_mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc)
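
As a hint at the follow-up use case mentioned in the commit message (a cache in x86 with 512+ objects), a hypothetical sketch of a larger-capacity topup is shown below. Note that in this patch __kvm_mmu_topup_memory_cache() is still static to kvm_main.c, so a later change would have to expose it before arch code could do this; the capacity value and wrapper name are illustrative only:

```c
/*
 * Hypothetical: the 512 below matches the "512+ objects" figure from the
 * commit message, and example_topup_large_cache() is not part of this
 * series. The one hard rule from the code above is that a given cache
 * must be topped up with the same capacity every time.
 */
#define EXAMPLE_LARGE_CACHE_CAPACITY	512

static int example_topup_large_cache(struct kvm_mmu_memory_cache *mc, int min)
{
	return __kvm_mmu_topup_memory_cache(mc, EXAMPLE_LARGE_CACHE_CAPACITY, min);
}
```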