From patchwork Thu Oct 10 18:24:20 2024
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 1995824
Date: Thu, 10 Oct 2024 11:24:20 -0700
In-Reply-To: <20241010182427.1434605-1-seanjc@google.com>
References: <20241010182427.1434605-1-seanjc@google.com>
Message-ID: <20241010182427.1434605-79-seanjc@google.com>
Subject: [PATCH v13 78/85] KVM: PPC: Explicitly require struct page memory for Ultravisor sharing
From: Sean Christopherson
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao,
    Huacai Chen, Michael Ellerman, Anup Patel, Paul Walmsley, Palmer Dabbelt,
    Albert Ou, Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
    Sean Christopherson
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, loongarch@lists.linux.dev, linux-mips@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org,
    linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
    Alex Bennée, Yan Zhao, David Matlack, David Stevens, Andrew Jones
Explicitly require "struct page" memory when sharing memory between
guest and host via an Ultravisor.  Given the number of pfn_to_page()
calls in the code, it's safe to assume that KVM already requires that
the pfn returned by gfn_to_pfn() is backed by struct page, i.e. this
is likely a bug fix, not a reduction in KVM capabilities.

Switching to gfn_to_page() will eventually allow removing gfn_to_pfn()
and kvm_pfn_to_refcounted_page().

Signed-off-by: Sean Christopherson
---
 arch/powerpc/kvm/book3s_hv_uvmem.c | 25 ++++++++++++-------------
 1 file changed, 12 insertions(+), 13 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_hv_uvmem.c b/arch/powerpc/kvm/book3s_hv_uvmem.c
index 92f33115144b..3a6592a31a10 100644
--- a/arch/powerpc/kvm/book3s_hv_uvmem.c
+++ b/arch/powerpc/kvm/book3s_hv_uvmem.c
@@ -879,9 +879,8 @@ static unsigned long kvmppc_share_page(struct kvm *kvm, unsigned long gpa,
 		unsigned long page_shift)
 {
 	int ret = H_PARAMETER;
-	struct page *uvmem_page;
+	struct page *page, *uvmem_page;
 	struct kvmppc_uvmem_page_pvt *pvt;
-	unsigned long pfn;
 	unsigned long gfn = gpa >> page_shift;
 	int srcu_idx;
 	unsigned long uvmem_pfn;
@@ -901,8 +900,8 @@ static unsigned long kvmppc_share_page(struct kvm *kvm, unsigned long gpa,
 
 retry:
 	mutex_unlock(&kvm->arch.uvmem_lock);
-	pfn = gfn_to_pfn(kvm, gfn);
-	if (is_error_noslot_pfn(pfn))
+	page = gfn_to_page(kvm, gfn);
+	if (!page)
 		goto out;
 
 	mutex_lock(&kvm->arch.uvmem_lock);
@@ -911,16 +910,16 @@ static unsigned long kvmppc_share_page(struct kvm *kvm, unsigned long gpa,
 		pvt = uvmem_page->zone_device_data;
 		pvt->skip_page_out = true;
 		pvt->remove_gfn = false; /* it continues to be a valid GFN */
-		kvm_release_pfn_clean(pfn);
+		kvm_release_page_unused(page);
 		goto retry;
 	}
 
-	if (!uv_page_in(kvm->arch.lpid, pfn << page_shift, gpa, 0,
+	if (!uv_page_in(kvm->arch.lpid, page_to_pfn(page) << page_shift, gpa, 0,
 			page_shift)) {
 		kvmppc_gfn_shared(gfn, kvm);
 		ret = H_SUCCESS;
 	}
-	kvm_release_pfn_clean(pfn);
+	kvm_release_page_clean(page);
 	mutex_unlock(&kvm->arch.uvmem_lock);
 out:
 	srcu_read_unlock(&kvm->srcu, srcu_idx);
@@ -1083,21 +1082,21 @@ kvmppc_h_svm_page_out(struct kvm *kvm, unsigned long gpa,
 
 int kvmppc_send_page_to_uv(struct kvm *kvm, unsigned long gfn)
 {
-	unsigned long pfn;
+	struct page *page;
 	int ret = U_SUCCESS;
 
-	pfn = gfn_to_pfn(kvm, gfn);
-	if (is_error_noslot_pfn(pfn))
+	page = gfn_to_page(kvm, gfn);
+	if (!page)
 		return -EFAULT;
 
 	mutex_lock(&kvm->arch.uvmem_lock);
 	if (kvmppc_gfn_is_uvmem_pfn(gfn, kvm, NULL))
 		goto out;
 
-	ret = uv_page_in(kvm->arch.lpid, pfn << PAGE_SHIFT, gfn << PAGE_SHIFT,
-			 0, PAGE_SHIFT);
+	ret = uv_page_in(kvm->arch.lpid, page_to_pfn(page) << PAGE_SHIFT,
+			 gfn << PAGE_SHIFT, 0, PAGE_SHIFT);
 out:
-	kvm_release_pfn_clean(pfn);
+	kvm_release_page_clean(page);
 	mutex_unlock(&kvm->arch.uvmem_lock);
 	return (ret == U_SUCCESS) ? RESUME_GUEST : -EFAULT;
 }
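
For readers following the conversion, the sketch below (not part of the
patch) illustrates the gfn_to_page() pattern the hunks above converge on:
look up a refcounted struct page for the gfn, derive the pfn with
page_to_pfn() only at the point of the Ultravisor call, and drop the
reference when done.  The function name, includes, and error handling are
illustrative assumptions; gfn_to_page(), page_to_pfn(),
kvm_release_page_clean() and uv_page_in() are the APIs used in the diff.

/*
 * Illustrative sketch only, not code from this patch.  The helper name
 * is hypothetical; only the called APIs match the ones used above.
 */
#include <linux/kvm_host.h>
#include <asm/ultravisor.h>

static int example_page_in_gfn(struct kvm *kvm, unsigned long gfn,
			       unsigned long page_shift)
{
	struct page *page;
	int ret;

	/* Fails (returns NULL) if the gfn is not backed by struct page. */
	page = gfn_to_page(kvm, gfn);
	if (!page)
		return -EFAULT;

	/* Derive the pfn from the refcounted page, not from gfn_to_pfn(). */
	ret = uv_page_in(kvm->arch.lpid, page_to_pfn(page) << page_shift,
			 gfn << page_shift, 0, page_shift);

	/* Drop the reference taken by gfn_to_page(). */
	kvm_release_page_clean(page);

	return (ret == U_SUCCESS) ? 0 : -EFAULT;
}

Note the distinction in kvmppc_share_page(): the retry path releases with
kvm_release_page_unused() because the looked-up page is never handed to
the Ultravisor, while the normal path uses kvm_release_page_clean() after
uv_page_in() has consumed the page.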