From patchwork Fri Apr 22 21:05:37 2022
X-Patchwork-Submitter: David Matlack <dmatlack@google.com>
X-Patchwork-Id: 1621202
Date: Fri, 22 Apr 2022 21:05:37 +0000
In-Reply-To: <20220422210546.458943-1-dmatlack@google.com>
Message-Id: <20220422210546.458943-12-dmatlack@google.com>
References: <20220422210546.458943-1-dmatlack@google.com>
X-Mailer: git-send-email 2.36.0.rc2.479.g8af0fa9b8e-goog
Subject: [PATCH v4 11/20] KVM: x86/mmu: Allow for NULL vcpu pointer in __kvm_mmu_get_shadow_page()
From: David Matlack <dmatlack@google.com>
To: Paolo Bonzini
Cc: Marc Zyngier, Huacai Chen, Aleksandar Markovic, Anup Patel,
 Paul Walmsley, Palmer Dabbelt, Albert Ou, Sean Christopherson,
 Andrew Jones, Ben Gardon, Peter Xu, maciej.szmigiero@oracle.com,
 "moderated list:KERNEL VIRTUAL MACHINE FOR ARM64 (KVM/arm64)",
 "open list:KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)",
 "open list:KERNEL VIRTUAL MACHINE FOR RISC-V (KVM/riscv)",
 Peter Feiner, David Matlack
Allow the vcpu pointer in __kvm_mmu_get_shadow_page() to be NULL. Rename
it to vcpu_or_null to prevent future commits from accidentally taking a
dependency on it without first considering the NULL case.

The vcpu pointer is only used for syncing indirect shadow pages in
kvm_mmu_find_shadow_page(). A vcpu pointer is not required for
correctness since unsync pages can simply be zapped. But this should
never occur in practice, since the only use case for passing a NULL vCPU
pointer is eager page splitting, which will only request direct shadow
pages (which can never be unsync).

Even though __kvm_mmu_get_shadow_page() can gracefully handle a NULL
vcpu, add a WARN() that will fire if __kvm_mmu_get_shadow_page() is ever
called to get an indirect shadow page with a NULL vCPU pointer, since
zapping unsync SPs is a performance overhead that should be considered.
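As a side note for reviewers, the control flow this introduces reduces to
the following standalone sketch (illustration only, not KVM code; the
types and names below are invented stand-ins for struct kvm_vcpu and
struct kvm_mmu_page). A NULL context pointer leaves ret at -1, which is
exactly the "sync failed, zap the page" fallback:

#include <stdio.h>
#include <stddef.h>
#include <stdbool.h>

struct ctx  { int id; };	/* invented stand-in for struct kvm_vcpu */
struct page { bool unsync; };	/* invented stand-in for struct kvm_mmu_page */

/* Models mmu->sync_page(): only callable with a valid context. */
static int sync_page(struct ctx *c, struct page *p)
{
	printf("ctx %d: syncing page\n", c->id);
	p->unsync = false;
	return 0;			/* >= 0: sync succeeded */
}

/* Models __kvm_sync_page(): a NULL context degrades to the zap path. */
static int try_sync(struct ctx *ctx_or_null, struct page *p)
{
	int ret = -1;			/* default: treat as a failed sync */

	if (ctx_or_null)
		ret = sync_page(ctx_or_null, p);

	if (ret < 0)
		printf("zapping unsync page (slow fallback)\n");

	return ret;
}

int main(void)
{
	struct ctx vcpu = { .id = 0 };
	struct page sp = { .unsync = true };

	try_sync(&vcpu, &sp);		/* normal path: page gets synced */
	sp.unsync = true;
	try_sync(NULL, &sp);		/* NULL path: page would be zapped */
	return 0;
}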
Signed-off-by: David Matlack <dmatlack@google.com>
---
 arch/x86/kvm/mmu/mmu.c | 40 ++++++++++++++++++++++++++++++++--------
 1 file changed, 32 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 04029c01aebd..21407bd4435a 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1845,16 +1845,27 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm,
 		&(_kvm)->arch.mmu_page_hash[kvm_page_table_hashfn(_gfn)])	\
 		if ((_sp)->gfn != (_gfn) || (_sp)->role.direct) {} else
 
-static int kvm_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
-			 struct list_head *invalid_list)
+static int __kvm_sync_page(struct kvm *kvm, struct kvm_vcpu *vcpu_or_null,
+			   struct kvm_mmu_page *sp,
+			   struct list_head *invalid_list)
 {
-	int ret = vcpu->arch.mmu->sync_page(vcpu, sp);
+	int ret = -1;
+
+	if (vcpu_or_null)
+		ret = vcpu_or_null->arch.mmu->sync_page(vcpu_or_null, sp);
 
 	if (ret < 0)
-		kvm_mmu_prepare_zap_page(vcpu->kvm, sp, invalid_list);
+		kvm_mmu_prepare_zap_page(kvm, sp, invalid_list);
+
 	return ret;
 }
 
+static int kvm_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
+			 struct list_head *invalid_list)
+{
+	return __kvm_sync_page(vcpu->kvm, vcpu, sp, invalid_list);
+}
+
 static bool kvm_mmu_remote_flush_or_zap(struct kvm *kvm,
 					struct list_head *invalid_list,
 					bool remote_flush)
@@ -2004,7 +2015,7 @@ static void clear_sp_write_flooding_count(u64 *spte)
 }
 
 static struct kvm_mmu_page *kvm_mmu_find_shadow_page(struct kvm *kvm,
-						     struct kvm_vcpu *vcpu,
+						     struct kvm_vcpu *vcpu_or_null,
 						     gfn_t gfn,
 						     struct hlist_head *sp_list,
 						     union kvm_mmu_page_role role)
@@ -2053,7 +2064,7 @@ static struct kvm_mmu_page *kvm_mmu_find_shadow_page(struct kvm *kvm,
 		 * If the sync fails, the page is zapped. If so, break
 		 * in order to rebuild it.
 		 */
-		ret = kvm_sync_page(vcpu, sp, &invalid_list);
+		ret = __kvm_sync_page(kvm, vcpu_or_null, sp, &invalid_list);
 		if (ret < 0)
 			break;
 
@@ -2120,7 +2131,7 @@ static struct kvm_mmu_page *kvm_mmu_alloc_shadow_page(struct kvm *kvm,
 }
 
 static struct kvm_mmu_page *__kvm_mmu_get_shadow_page(struct kvm *kvm,
-						      struct kvm_vcpu *vcpu,
+						      struct kvm_vcpu *vcpu_or_null,
 						      struct shadow_page_caches *caches,
 						      gfn_t gfn,
 						      union kvm_mmu_page_role role)
@@ -2129,9 +2140,22 @@ static struct kvm_mmu_page *__kvm_mmu_get_shadow_page(struct kvm *kvm,
 	struct kvm_mmu_page *sp;
 	bool created = false;
 
+	/*
+	 * A vCPU pointer should always be provided when getting indirect
+	 * shadow pages, as that shadow page may already exist and need to be
+	 * synced using the vCPU pointer (see __kvm_sync_page()). Direct shadow
+	 * pages are never unsync and thus do not require a vCPU pointer.
+	 *
+	 * No need to panic here as __kvm_sync_page() falls back to zapping an
+	 * unsync page if the vCPU pointer is NULL. But still WARN() since
+	 * such zapping will impact performance and this situation is never
+	 * expected to occur in practice.
+	 */
+	WARN_ON(!vcpu_or_null && !role.direct);
+
 	sp_list = &kvm->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)];
 
-	sp = kvm_mmu_find_shadow_page(kvm, vcpu, gfn, sp_list, role);
+	sp = kvm_mmu_find_shadow_page(kvm, vcpu_or_null, gfn, sp_list, role);
 	if (!sp) {
 		created = true;
 		sp = kvm_mmu_alloc_shadow_page(kvm, caches, gfn, sp_list, role);
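For context, the intended NULL-vCPU user is the eager page splitting work
later in this series. A hypothetical caller (illustration only, not part
of this diff; the helper name is invented) would look roughly like:

/*
 * Hypothetical sketch of an eager-page-splitting caller. Because only
 * direct shadow pages are requested, role.direct is set, no unsync page
 * can ever be found, and the WARN_ON() added above never fires.
 */
static struct kvm_mmu_page *
get_direct_sp_for_split(struct kvm *kvm, struct shadow_page_caches *caches,
			gfn_t gfn, union kvm_mmu_page_role role)
{
	role.direct = 1;

	/* Passing NULL is safe: direct SPs never reach ->sync_page(). */
	return __kvm_mmu_get_shadow_page(kvm, NULL, caches, gfn, role);
}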