From patchwork Mon Jul 27 18:07:14 2020
X-Patchwork-Submitter: Ram Pai
X-Patchwork-Id: 1337129
From: Ram Pai
To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org
Subject: [PATCH v6 1/5] KVM: PPC: Book3S HV: Fix function definition in book3s_hv_uvmem.c
Date: Mon, 27 Jul 2020 11:07:14 -0700
Message-Id: <1595873238-26184-2-git-send-email-linuxram@us.ibm.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To:
 <1595873238-26184-1-git-send-email-linuxram@us.ibm.com>
References: <1595873238-26184-1-git-send-email-linuxram@us.ibm.com>
Cc: ldufour@linux.ibm.com, linuxram@us.ibm.com, cclaudio@linux.ibm.com,
 bharata@linux.ibm.com, sathnaga@linux.vnet.ibm.com,
 aneesh.kumar@linux.ibm.com, sukadev@linux.vnet.ibm.com,
 bauerman@linux.ibm.com, david@gibson.dropbear.id.au

Without this fix, git is confused. It generates wrong function context
for code changes in subsequent patches. Weird, but true.

Cc: Paul Mackerras
Cc: Benjamin Herrenschmidt
Cc: Michael Ellerman
Cc: Bharata B Rao
Cc: Aneesh Kumar K.V
Cc: Sukadev Bhattiprolu
Cc: Laurent Dufour
Cc: Thiago Jung Bauermann
Cc: David Gibson
Cc: Claudio Carvalho
Cc: kvm-ppc@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Ram Pai
---
 arch/powerpc/kvm/book3s_hv_uvmem.c | 21 ++++++++++-----------
 1 file changed, 10 insertions(+), 11 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_hv_uvmem.c b/arch/powerpc/kvm/book3s_hv_uvmem.c
index 09d8119..e6f76bc 100644
--- a/arch/powerpc/kvm/book3s_hv_uvmem.c
+++ b/arch/powerpc/kvm/book3s_hv_uvmem.c
@@ -382,8 +382,7 @@ static struct page *kvmppc_uvmem_get_page(unsigned long gpa, struct kvm *kvm)
  * Alloc a PFN from private device memory pool and copy page from normal
  * memory to secure memory using UV_PAGE_IN uvcall.
  */
-static int
-kvmppc_svm_page_in(struct vm_area_struct *vma, unsigned long start,
+static int kvmppc_svm_page_in(struct vm_area_struct *vma, unsigned long start,
 		   unsigned long end, unsigned long gpa, struct kvm *kvm,
 		   unsigned long page_shift, bool *downgrade)
 {
@@ -450,8 +449,8 @@ static struct page *kvmppc_uvmem_get_page(unsigned long gpa, struct kvm *kvm)
  * In the former case, uses dev_pagemap_ops.migrate_to_ram handler
  * to unmap the device page from QEMU's page tables.
  */
-static unsigned long
-kvmppc_share_page(struct kvm *kvm, unsigned long gpa, unsigned long page_shift)
+static unsigned long kvmppc_share_page(struct kvm *kvm, unsigned long gpa,
+		unsigned long page_shift)
 {
 	int ret = H_PARAMETER;
 
@@ -500,9 +499,9 @@ static struct page *kvmppc_uvmem_get_page(unsigned long gpa, struct kvm *kvm)
  * H_PAGE_IN_SHARED flag makes the page shared which means that the same
  * memory in is visible from both UV and HV.
*/ -unsigned long -kvmppc_h_svm_page_in(struct kvm *kvm, unsigned long gpa, - unsigned long flags, unsigned long page_shift) +unsigned long kvmppc_h_svm_page_in(struct kvm *kvm, unsigned long gpa, + unsigned long flags, + unsigned long page_shift) { bool downgrade = false; unsigned long start, end; @@ -559,10 +558,10 @@ static struct page *kvmppc_uvmem_get_page(unsigned long gpa, struct kvm *kvm) * Provision a new page on HV side and copy over the contents * from secure memory using UV_PAGE_OUT uvcall. */ -static int -kvmppc_svm_page_out(struct vm_area_struct *vma, unsigned long start, - unsigned long end, unsigned long page_shift, - struct kvm *kvm, unsigned long gpa) +static int kvmppc_svm_page_out(struct vm_area_struct *vma, + unsigned long start, + unsigned long end, unsigned long page_shift, + struct kvm *kvm, unsigned long gpa) { unsigned long src_pfn, dst_pfn = 0; struct migrate_vma mig; From patchwork Mon Jul 27 18:07:15 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ram Pai X-Patchwork-Id: 1337134 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [203.11.71.2]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 4BFp1j3gPdz9sRk for ; Tue, 28 Jul 2020 04:17:29 +1000 (AEST) Authentication-Results: ozlabs.org; dmarc=fail (p=none dis=none) header.from=us.ibm.com Received: from bilbo.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 4BFp1j202lzF1bv for ; Tue, 28 Jul 2020 04:17:29 +1000 (AEST) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: lists.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=us.ibm.com (client-ip=148.163.156.1; helo=mx0a-001b2d01.pphosted.com; envelope-from=linuxram@us.ibm.com; receiver=) Authentication-Results: lists.ozlabs.org; dmarc=pass (p=none dis=none) header.from=us.ibm.com Received: from mx0a-001b2d01.pphosted.com (mx0a-001b2d01.pphosted.com [148.163.156.1]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 4BFnpZ0lM1zDrdH for ; Tue, 28 Jul 2020 04:07:49 +1000 (AEST) Received: from pps.filterd (m0098399.ppops.net [127.0.0.1]) by mx0a-001b2d01.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id 06RI25HS006789; Mon, 27 Jul 2020 14:07:42 -0400 Received: from ppma02fra.de.ibm.com (47.49.7a9f.ip4.static.sl-reverse.com [159.122.73.71]) by mx0a-001b2d01.pphosted.com with ESMTP id 32j2paj7sj-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Mon, 27 Jul 2020 14:07:41 -0400 Received: from pps.filterd (ppma02fra.de.ibm.com [127.0.0.1]) by ppma02fra.de.ibm.com (8.16.0.42/8.16.0.42) with SMTP id 06RI6QkR006570; Mon, 27 Jul 2020 18:07:39 GMT Received: from b06cxnps3075.portsmouth.uk.ibm.com (d06relay10.portsmouth.uk.ibm.com [9.149.109.195]) by ppma02fra.de.ibm.com with ESMTP id 32gcq0shrk-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Mon, 27 Jul 2020 18:07:39 +0000 Received: from d06av23.portsmouth.uk.ibm.com (d06av23.portsmouth.uk.ibm.com [9.149.105.59]) by b06cxnps3075.portsmouth.uk.ibm.com (8.14.9/8.14.9/NCO v10.0) with ESMTP id 06RI7aL530802276 (version=TLSv1/SSLv3 
From: Ram Pai
To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org
Subject: [PATCH v6 2/5] KVM: PPC: Book3S HV: Disable page merging in H_SVM_INIT_START
Date: Mon, 27 Jul 2020 11:07:15 -0700
Message-Id: <1595873238-26184-3-git-send-email-linuxram@us.ibm.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1595873238-26184-1-git-send-email-linuxram@us.ibm.com>
References: <1595873238-26184-1-git-send-email-linuxram@us.ibm.com>
Cc: ldufour@linux.ibm.com, linuxram@us.ibm.com, cclaudio@linux.ibm.com,
 bharata@linux.ibm.com, sathnaga@linux.vnet.ibm.com,
 aneesh.kumar@linux.ibm.com, sukadev@linux.vnet.ibm.com,
 bauerman@linux.ibm.com, david@gibson.dropbear.id.au

Page-merging of pages in memory-slots associated with a Secure VM is
disabled in the H_SVM_PAGE_IN handler. This operation should have been
done much earlier, the moment the VM is initiated for secure-transition.
Delaying this operation increases the probability of those pages
acquiring new references, making it impossible to migrate them in the
H_SVM_PAGE_IN handler.

Disable page-merging in H_SVM_INIT_START handling.

Reviewed-by: Bharata B Rao
Signed-off-by: Ram Pai
---
 Documentation/powerpc/ultravisor.rst |   1 +
 arch/powerpc/kvm/book3s_hv_uvmem.c   | 123 +++++++++++++++++++++++++----------
 2 files changed, 89 insertions(+), 35 deletions(-)

diff --git a/Documentation/powerpc/ultravisor.rst b/Documentation/powerpc/ultravisor.rst
index df136c8..a1c8c37 100644
--- a/Documentation/powerpc/ultravisor.rst
+++ b/Documentation/powerpc/ultravisor.rst
@@ -895,6 +895,7 @@ Return values
 	One of the following values:
 
 	* H_SUCCESS	on success.
+	* H_STATE	if the VM is not in a position to switch to secure.
Description ~~~~~~~~~~~ diff --git a/arch/powerpc/kvm/book3s_hv_uvmem.c b/arch/powerpc/kvm/book3s_hv_uvmem.c index e6f76bc..533b608 100644 --- a/arch/powerpc/kvm/book3s_hv_uvmem.c +++ b/arch/powerpc/kvm/book3s_hv_uvmem.c @@ -211,10 +211,79 @@ static bool kvmppc_gfn_is_uvmem_pfn(unsigned long gfn, struct kvm *kvm, return false; } +static int kvmppc_memslot_page_merge(struct kvm *kvm, + const struct kvm_memory_slot *memslot, bool merge) +{ + unsigned long gfn = memslot->base_gfn; + unsigned long end, start = gfn_to_hva(kvm, gfn); + int ret = 0; + struct vm_area_struct *vma; + int merge_flag = (merge) ? MADV_MERGEABLE : MADV_UNMERGEABLE; + + if (kvm_is_error_hva(start)) + return H_STATE; + + end = start + (memslot->npages << PAGE_SHIFT); + + mmap_write_lock(kvm->mm); + do { + vma = find_vma_intersection(kvm->mm, start, end); + if (!vma) { + ret = H_STATE; + break; + } + ret = ksm_madvise(vma, vma->vm_start, vma->vm_end, + merge_flag, &vma->vm_flags); + if (ret) { + ret = H_STATE; + break; + } + start = vma->vm_end; + } while (end > vma->vm_end); + + mmap_write_unlock(kvm->mm); + return ret; +} + +static void kvmppc_uvmem_memslot_delete(struct kvm *kvm, + const struct kvm_memory_slot *memslot) +{ + uv_unregister_mem_slot(kvm->arch.lpid, memslot->id); + kvmppc_uvmem_slot_free(kvm, memslot); + kvmppc_memslot_page_merge(kvm, memslot, true); +} + +static int kvmppc_uvmem_memslot_create(struct kvm *kvm, + const struct kvm_memory_slot *memslot) +{ + int ret = H_PARAMETER; + + if (kvmppc_memslot_page_merge(kvm, memslot, false)) + return ret; + + if (kvmppc_uvmem_slot_init(kvm, memslot)) + goto out1; + + ret = uv_register_mem_slot(kvm->arch.lpid, + memslot->base_gfn << PAGE_SHIFT, + memslot->npages * PAGE_SIZE, + 0, memslot->id); + if (ret < 0) { + ret = H_PARAMETER; + goto out; + } + return 0; +out: + kvmppc_uvmem_slot_free(kvm, memslot); +out1: + kvmppc_memslot_page_merge(kvm, memslot, true); + return ret; +} + unsigned long kvmppc_h_svm_init_start(struct kvm *kvm) { struct kvm_memslots *slots; - struct kvm_memory_slot *memslot; + struct kvm_memory_slot *memslot, *m; int ret = H_SUCCESS; int srcu_idx; @@ -232,23 +301,24 @@ unsigned long kvmppc_h_svm_init_start(struct kvm *kvm) return H_AUTHORITY; srcu_idx = srcu_read_lock(&kvm->srcu); + + /* register the memslot */ slots = kvm_memslots(kvm); kvm_for_each_memslot(memslot, slots) { - if (kvmppc_uvmem_slot_init(kvm, memslot)) { - ret = H_PARAMETER; - goto out; - } - ret = uv_register_mem_slot(kvm->arch.lpid, - memslot->base_gfn << PAGE_SHIFT, - memslot->npages * PAGE_SIZE, - 0, memslot->id); - if (ret < 0) { - kvmppc_uvmem_slot_free(kvm, memslot); - ret = H_PARAMETER; - goto out; + ret = kvmppc_uvmem_memslot_create(kvm, memslot); + if (ret) + break; + } + + if (ret) { + slots = kvm_memslots(kvm); + kvm_for_each_memslot(m, slots) { + if (m == memslot) + break; + kvmppc_uvmem_memslot_delete(kvm, memslot); } } -out: + srcu_read_unlock(&kvm->srcu, srcu_idx); return ret; } @@ -384,7 +454,7 @@ static struct page *kvmppc_uvmem_get_page(unsigned long gpa, struct kvm *kvm) */ static int kvmppc_svm_page_in(struct vm_area_struct *vma, unsigned long start, unsigned long end, unsigned long gpa, struct kvm *kvm, - unsigned long page_shift, bool *downgrade) + unsigned long page_shift) { unsigned long src_pfn, dst_pfn = 0; struct migrate_vma mig; @@ -400,18 +470,6 @@ static int kvmppc_svm_page_in(struct vm_area_struct *vma, unsigned long start, mig.src = &src_pfn; mig.dst = &dst_pfn; - /* - * We come here with mmap_lock write lock held just for - * ksm_madvise(), 
otherwise we only need read mmap_lock. - * Hence downgrade to read lock once ksm_madvise() is done. - */ - ret = ksm_madvise(vma, vma->vm_start, vma->vm_end, - MADV_UNMERGEABLE, &vma->vm_flags); - mmap_write_downgrade(kvm->mm); - *downgrade = true; - if (ret) - return ret; - ret = migrate_vma_setup(&mig); if (ret) return ret; @@ -503,7 +561,6 @@ unsigned long kvmppc_h_svm_page_in(struct kvm *kvm, unsigned long gpa, unsigned long flags, unsigned long page_shift) { - bool downgrade = false; unsigned long start, end; struct vm_area_struct *vma; int srcu_idx; @@ -524,7 +581,7 @@ unsigned long kvmppc_h_svm_page_in(struct kvm *kvm, unsigned long gpa, ret = H_PARAMETER; srcu_idx = srcu_read_lock(&kvm->srcu); - mmap_write_lock(kvm->mm); + mmap_read_lock(kvm->mm); start = gfn_to_hva(kvm, gfn); if (kvm_is_error_hva(start)) @@ -540,16 +597,12 @@ unsigned long kvmppc_h_svm_page_in(struct kvm *kvm, unsigned long gpa, if (!vma || vma->vm_start > start || vma->vm_end < end) goto out_unlock; - if (!kvmppc_svm_page_in(vma, start, end, gpa, kvm, page_shift, - &downgrade)) + if (!kvmppc_svm_page_in(vma, start, end, gpa, kvm, page_shift)) ret = H_SUCCESS; out_unlock: mutex_unlock(&kvm->arch.uvmem_lock); out: - if (downgrade) - mmap_read_unlock(kvm->mm); - else - mmap_write_unlock(kvm->mm); + mmap_read_unlock(kvm->mm); srcu_read_unlock(&kvm->srcu, srcu_idx); return ret; } From patchwork Mon Jul 27 18:07:16 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ram Pai X-Patchwork-Id: 1337132 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [203.11.71.2]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 4BFnzB1Y2Vz9sSn for ; Tue, 28 Jul 2020 04:15:18 +1000 (AEST) Authentication-Results: ozlabs.org; dmarc=fail (p=none dis=none) header.from=us.ibm.com Received: from bilbo.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 4BFnzB07hXzDsNW for ; Tue, 28 Jul 2020 04:15:18 +1000 (AEST) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: lists.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=us.ibm.com (client-ip=148.163.156.1; helo=mx0a-001b2d01.pphosted.com; envelope-from=linuxram@us.ibm.com; receiver=) Authentication-Results: lists.ozlabs.org; dmarc=pass (p=none dis=none) header.from=us.ibm.com Received: from mx0a-001b2d01.pphosted.com (mx0a-001b2d01.pphosted.com [148.163.156.1]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 4BFnpZ1qJNzF0Ys for ; Tue, 28 Jul 2020 04:07:50 +1000 (AEST) Received: from pps.filterd (m0098410.ppops.net [127.0.0.1]) by mx0a-001b2d01.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id 06RI2NHL035334; Mon, 27 Jul 2020 14:07:45 -0400 Received: from ppma06ams.nl.ibm.com (66.31.33a9.ip4.static.sl-reverse.com [169.51.49.102]) by mx0a-001b2d01.pphosted.com with ESMTP id 32j213bjyf-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Mon, 27 Jul 2020 14:07:44 -0400 Received: from pps.filterd (ppma06ams.nl.ibm.com [127.0.0.1]) by ppma06ams.nl.ibm.com (8.16.0.42/8.16.0.42) with SMTP id 06RI2LTw010941; Mon, 27 Jul 2020 18:07:42 GMT Received: from 
From: Ram Pai
To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org
Subject: [PATCH v6 3/5] KVM: PPC: Book3S HV: track the state GFNs associated with secure VMs
Date: Mon, 27 Jul 2020 11:07:16 -0700
Message-Id: <1595873238-26184-4-git-send-email-linuxram@us.ibm.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1595873238-26184-1-git-send-email-linuxram@us.ibm.com>
References: <1595873238-26184-1-git-send-email-linuxram@us.ibm.com>
Cc: ldufour@linux.ibm.com, linuxram@us.ibm.com, cclaudio@linux.ibm.com,
 bharata@linux.ibm.com, sathnaga@linux.vnet.ibm.com,
 aneesh.kumar@linux.ibm.com, sukadev@linux.vnet.ibm.com,
 bauerman@linux.ibm.com, david@gibson.dropbear.id.au

During the life of an SVM, its GFNs transition through normal, secure and
shared states. Since the kernel does not track GFNs that are shared, it is
not possible to disambiguate a shared GFN from a GFN whose PFN has not yet
been migrated to a secure-PFN. It is also not possible to disambiguate a
secure-GFN from a GFN that has been paged out of the ultravisor. The
ability to identify the state of a GFN is needed to skip migration of its
PFN to a secure-PFN during ESM transition.

The code is re-organized to track the states of a GFN as explained below.

************************************************************************
1. States of a GFN
   ---------------
   The GFN can be in one of the following states.

   (a) Secure - The GFN is secure. The GFN is associated with a Secure VM;
       the contents of the GFN are not accessible to the Hypervisor. This
       GFN can be backed by a secure-PFN, or can be backed by a normal-PFN
       with contents encrypted. The former is true when the GFN is paged in
       to the ultravisor; the latter is true when the GFN is paged out of
       the ultravisor.

   (b) Shared - The GFN is shared. The GFN is associated with a secure VM.
       The contents of the GFN are accessible to the Hypervisor. This GFN
       is backed by a normal-PFN and its content is un-encrypted.

   (c) Normal - The GFN is normal. The GFN is associated with a normal VM.
       The contents of the GFN are accessible to the Hypervisor. Its
       content is never encrypted.

2. States of a VM
   ---------------

   (a) Normal VM: A VM whose contents are always accessible to the
       hypervisor. All its GFNs are normal-GFNs.

   (b) Secure VM: A VM whose contents are not accessible to the hypervisor
       without the VM's consent. Its GFNs are either Shared-GFNs or
       Secure-GFNs.

   (c) Transient VM: A Normal VM that is transitioning to a secure VM. The
       transition starts on successful return of H_SVM_INIT_START, and ends
       on successful return of H_SVM_INIT_DONE. This transient VM can have
       GFNs in any of the three states, i.e. Secure-GFN, Shared-GFN, and
       Normal-GFN. The VM never executes in this state in supervisor-mode.

3. Memory slot State
   ------------------
   The state of a memory slot mirrors the state of the VM the memory slot
   is associated with.

4. VM State transition
   --------------------

   A VM always starts in Normal Mode.

   H_SVM_INIT_START moves the VM into transient state. During this time the
   Ultravisor may request some of its GFNs to be shared or secured. So its
   GFNs can be in one of the three GFN states.

   H_SVM_INIT_DONE moves the VM entirely from transient state to
   secure-state. At this point any left-over normal-GFNs are transitioned
   to Secure-GFNs.

   H_SVM_INIT_ABORT moves the transient VM back to a normal VM. All its
   GFNs are moved to Normal-GFNs.

   UV_TERMINATE transitions the secure-VM back to a normal-VM. All the
   secure-GFNs and shared-GFNs are transitioned to normal-GFNs.
   Note: The contents of the normal-GFNs are undefined at this point.

5. GFN state implementation
   -------------------------

   A Secure GFN is associated with a secure-PFN, also called uvmem_pfn,
   when the GFN is paged in. Its pfn[] has the KVMPPC_GFN_UVMEM_PFN flag
   set, and contains the value of the secure-PFN. It is associated with a
   normal-PFN, also called mem_pfn, when the GFN is paged out. Its pfn[]
   has the KVMPPC_GFN_MEM_PFN flag set. The value of the normal-PFN is not
   tracked.

   A Shared GFN is associated with a normal-PFN. Its pfn[] has the
   KVMPPC_UVMEM_SHARED_PFN flag set. The value of the normal-PFN is not
   tracked.

   A Normal GFN is associated with a normal-PFN. Its pfn[] has no flag set.
   The value of the normal-PFN is not tracked.

6. Life cycle of a GFN
   --------------------

   --------------------------------------------------------------
   |        |  Share   | Unshare  |    SVM    |H_SVM_INIT_DONE|
   |        |operation |operation |   abort/  |               |
   |        |          |          | terminate |               |
   --------------------------------------------------------------
   |        |          |          |           |               |
   | Secure |  Shared  |  Secure  |  Normal   |    Secure     |
   |        |          |          |           |               |
   | Shared |  Shared  |  Secure  |  Normal   |    Shared     |
   |        |          |          |           |               |
   | Normal |  Shared  |  Secure  |  Normal   |    Secure     |
   --------------------------------------------------------------

7.
Life cycle of a VM -------------------- -------------------------------------------------------------------- | | start | H_SVM_ |H_SVM_ |H_SVM_ |UV_SVM_ | | | VM |INIT_START|INIT_DONE|INIT_ABORT |TERMINATE | | | | | | | | --------- ---------------------------------------------------------- | | | | | | | | Normal | Normal | Transient|Error |Error |Normal | | | | | | | | | Secure | Error | Error |Error |Error |Normal | | | | | | | | |Transient| N/A | Error |Secure |Normal |Normal | -------------------------------------------------------------------- ************************************************************************ Cc: Paul Mackerras Cc: Benjamin Herrenschmidt Cc: Michael Ellerman Cc: Bharata B Rao Cc: Aneesh Kumar K.V Cc: Sukadev Bhattiprolu Cc: Laurent Dufour Cc: Thiago Jung Bauermann Cc: David Gibson Cc: Claudio Carvalho Cc: kvm-ppc@vger.kernel.org Cc: linuxppc-dev@lists.ozlabs.org Reviewed-by: Bharata B Rao Reviewed-by: Thiago Jung Bauermann Signed-off-by: Ram Pai --- arch/powerpc/kvm/book3s_hv_uvmem.c | 191 +++++++++++++++++++++++++++++++++---- 1 file changed, 172 insertions(+), 19 deletions(-) diff --git a/arch/powerpc/kvm/book3s_hv_uvmem.c b/arch/powerpc/kvm/book3s_hv_uvmem.c index 533b608..1b2b029 100644 --- a/arch/powerpc/kvm/book3s_hv_uvmem.c +++ b/arch/powerpc/kvm/book3s_hv_uvmem.c @@ -98,7 +98,127 @@ static unsigned long *kvmppc_uvmem_bitmap; static DEFINE_SPINLOCK(kvmppc_uvmem_bitmap_lock); -#define KVMPPC_UVMEM_PFN (1UL << 63) +/* + * States of a GFN + * --------------- + * The GFN can be in one of the following states. + * + * (a) Secure - The GFN is secure. The GFN is associated with + * a Secure VM, the contents of the GFN is not accessible + * to the Hypervisor. This GFN can be backed by a secure-PFN, + * or can be backed by a normal-PFN with contents encrypted. + * The former is true when the GFN is paged-in into the + * ultravisor. The latter is true when the GFN is paged-out + * of the ultravisor. + * + * (b) Shared - The GFN is shared. The GFN is associated with a + * a secure VM. The contents of the GFN is accessible to + * Hypervisor. This GFN is backed by a normal-PFN and its + * content is un-encrypted. + * + * (c) Normal - The GFN is a normal. The GFN is associated with + * a normal VM. The contents of the GFN is accesible to + * the Hypervisor. Its content is never encrypted. + * + * States of a VM. + * --------------- + * + * Normal VM: A VM whose contents are always accessible to + * the hypervisor. All its GFNs are normal-GFNs. + * + * Secure VM: A VM whose contents are not accessible to the + * hypervisor without the VM's consent. Its GFNs are + * either Shared-GFN or Secure-GFNs. + * + * Transient VM: A Normal VM that is transitioning to secure VM. + * The transition starts on successful return of + * H_SVM_INIT_START, and ends on successful return + * of H_SVM_INIT_DONE. This transient VM, can have GFNs + * in any of the three states; i.e Secure-GFN, Shared-GFN, + * and Normal-GFN. The VM never executes in this state + * in supervisor-mode. + * + * Memory slot State. + * ----------------------------- + * The state of a memory slot mirrors the state of the + * VM the memory slot is associated with. + * + * VM State transition. + * -------------------- + * + * A VM always starts in Normal Mode. + * + * H_SVM_INIT_START moves the VM into transient state. During this + * time the Ultravisor may request some of its GFNs to be shared or + * secured. So its GFNs can be in one of the three GFN states. 
+ * + * H_SVM_INIT_DONE moves the VM entirely from transient state to + * secure-state. At this point any left-over normal-GFNs are + * transitioned to Secure-GFN. + * + * H_SVM_INIT_ABORT moves the transient VM back to normal VM. + * All its GFNs are moved to Normal-GFNs. + * + * UV_TERMINATE transitions the secure-VM back to normal-VM. All + * the secure-GFN and shared-GFNs are tranistioned to normal-GFN + * Note: The contents of the normal-GFN is undefined at this point. + * + * GFN state implementation: + * ------------------------- + * + * Secure GFN is associated with a secure-PFN; also called uvmem_pfn, + * when the GFN is paged-in. Its pfn[] has KVMPPC_GFN_UVMEM_PFN flag + * set, and contains the value of the secure-PFN. + * It is associated with a normal-PFN; also called mem_pfn, when + * the GFN is pagedout. Its pfn[] has KVMPPC_GFN_MEM_PFN flag set. + * The value of the normal-PFN is not tracked. + * + * Shared GFN is associated with a normal-PFN. Its pfn[] has + * KVMPPC_UVMEM_SHARED_PFN flag set. The value of the normal-PFN + * is not tracked. + * + * Normal GFN is associated with normal-PFN. Its pfn[] has + * no flag set. The value of the normal-PFN is not tracked. + * + * Life cycle of a GFN + * -------------------- + * + * -------------------------------------------------------------- + * | | Share | Unshare | SVM |H_SVM_INIT_DONE| + * | |operation |operation | abort/ | | + * | | | | terminate | | + * ------------------------------------------------------------- + * | | | | | | + * | Secure | Shared | Secure |Normal |Secure | + * | | | | | | + * | Shared | Shared | Secure |Normal |Shared | + * | | | | | | + * | Normal | Shared | Secure |Normal |Secure | + * -------------------------------------------------------------- + * + * Life cycle of a VM + * -------------------- + * + * -------------------------------------------------------------------- + * | | start | H_SVM_ |H_SVM_ |H_SVM_ |UV_SVM_ | + * | | VM |INIT_START|INIT_DONE|INIT_ABORT |TERMINATE | + * | | | | | | | + * --------- ---------------------------------------------------------- + * | | | | | | | + * | Normal | Normal | Transient|Error |Error |Normal | + * | | | | | | | + * | Secure | Error | Error |Error |Error |Normal | + * | | | | | | | + * |Transient| N/A | Error |Secure |Normal |Normal | + * -------------------------------------------------------------------- + */ + +#define KVMPPC_GFN_UVMEM_PFN (1UL << 63) +#define KVMPPC_GFN_MEM_PFN (1UL << 62) +#define KVMPPC_GFN_SHARED (1UL << 61) +#define KVMPPC_GFN_SECURE (KVMPPC_GFN_UVMEM_PFN | KVMPPC_GFN_MEM_PFN) +#define KVMPPC_GFN_FLAG_MASK (KVMPPC_GFN_SECURE | KVMPPC_GFN_SHARED) +#define KVMPPC_GFN_PFN_MASK (~KVMPPC_GFN_FLAG_MASK) struct kvmppc_uvmem_slot { struct list_head list; @@ -106,11 +226,11 @@ struct kvmppc_uvmem_slot { unsigned long base_pfn; unsigned long *pfns; }; - struct kvmppc_uvmem_page_pvt { struct kvm *kvm; unsigned long gpa; bool skip_page_out; + bool remove_gfn; }; bool kvmppc_uvmem_available(void) @@ -163,8 +283,8 @@ void kvmppc_uvmem_slot_free(struct kvm *kvm, const struct kvm_memory_slot *slot) mutex_unlock(&kvm->arch.uvmem_lock); } -static void kvmppc_uvmem_pfn_insert(unsigned long gfn, unsigned long uvmem_pfn, - struct kvm *kvm) +static void kvmppc_mark_gfn(unsigned long gfn, struct kvm *kvm, + unsigned long flag, unsigned long uvmem_pfn) { struct kvmppc_uvmem_slot *p; @@ -172,24 +292,41 @@ static void kvmppc_uvmem_pfn_insert(unsigned long gfn, unsigned long uvmem_pfn, if (gfn >= p->base_pfn && gfn < p->base_pfn + p->nr_pfns) { unsigned 
long index = gfn - p->base_pfn; - p->pfns[index] = uvmem_pfn | KVMPPC_UVMEM_PFN; + if (flag == KVMPPC_GFN_UVMEM_PFN) + p->pfns[index] = uvmem_pfn | flag; + else + p->pfns[index] = flag; return; } } } -static void kvmppc_uvmem_pfn_remove(unsigned long gfn, struct kvm *kvm) +/* mark the GFN as secure-GFN associated with @uvmem pfn device-PFN. */ +static void kvmppc_gfn_secure_uvmem_pfn(unsigned long gfn, + unsigned long uvmem_pfn, struct kvm *kvm) { - struct kvmppc_uvmem_slot *p; + kvmppc_mark_gfn(gfn, kvm, KVMPPC_GFN_UVMEM_PFN, uvmem_pfn); +} - list_for_each_entry(p, &kvm->arch.uvmem_pfns, list) { - if (gfn >= p->base_pfn && gfn < p->base_pfn + p->nr_pfns) { - p->pfns[gfn - p->base_pfn] = 0; - return; - } - } +/* mark the GFN as secure-GFN associated with a memory-PFN. */ +static void kvmppc_gfn_secure_mem_pfn(unsigned long gfn, struct kvm *kvm) +{ + kvmppc_mark_gfn(gfn, kvm, KVMPPC_GFN_MEM_PFN, 0); } +/* mark the GFN as a shared GFN. */ +static void kvmppc_gfn_shared(unsigned long gfn, struct kvm *kvm) +{ + kvmppc_mark_gfn(gfn, kvm, KVMPPC_GFN_SHARED, 0); +} + +/* mark the GFN as a non-existent GFN. */ +static void kvmppc_gfn_remove(unsigned long gfn, struct kvm *kvm) +{ + kvmppc_mark_gfn(gfn, kvm, 0, 0); +} + +/* return true, if the GFN is a secure-GFN backed by a secure-PFN */ static bool kvmppc_gfn_is_uvmem_pfn(unsigned long gfn, struct kvm *kvm, unsigned long *uvmem_pfn) { @@ -199,10 +336,10 @@ static bool kvmppc_gfn_is_uvmem_pfn(unsigned long gfn, struct kvm *kvm, if (gfn >= p->base_pfn && gfn < p->base_pfn + p->nr_pfns) { unsigned long index = gfn - p->base_pfn; - if (p->pfns[index] & KVMPPC_UVMEM_PFN) { + if (p->pfns[index] & KVMPPC_GFN_UVMEM_PFN) { if (uvmem_pfn) *uvmem_pfn = p->pfns[index] & - ~KVMPPC_UVMEM_PFN; + KVMPPC_GFN_PFN_MASK; return true; } else return false; @@ -354,6 +491,7 @@ void kvmppc_uvmem_drop_pages(const struct kvm_memory_slot *free, mutex_lock(&kvm->arch.uvmem_lock); if (!kvmppc_gfn_is_uvmem_pfn(gfn, kvm, &uvmem_pfn)) { + kvmppc_gfn_remove(gfn, kvm); mutex_unlock(&kvm->arch.uvmem_lock); continue; } @@ -361,6 +499,7 @@ void kvmppc_uvmem_drop_pages(const struct kvm_memory_slot *free, uvmem_page = pfn_to_page(uvmem_pfn); pvt = uvmem_page->zone_device_data; pvt->skip_page_out = skip_page_out; + pvt->remove_gfn = true; mutex_unlock(&kvm->arch.uvmem_lock); pfn = gfn_to_pfn(kvm, gfn); @@ -430,7 +569,7 @@ static struct page *kvmppc_uvmem_get_page(unsigned long gpa, struct kvm *kvm) goto out_clear; uvmem_pfn = bit + pfn_first; - kvmppc_uvmem_pfn_insert(gpa >> PAGE_SHIFT, uvmem_pfn, kvm); + kvmppc_gfn_secure_uvmem_pfn(gpa >> PAGE_SHIFT, uvmem_pfn, kvm); pvt->gpa = gpa; pvt->kvm = kvm; @@ -525,6 +664,11 @@ static unsigned long kvmppc_share_page(struct kvm *kvm, unsigned long gpa, uvmem_page = pfn_to_page(uvmem_pfn); pvt = uvmem_page->zone_device_data; pvt->skip_page_out = true; + /* + * do not drop the GFN. It is a valid GFN + * that is transitioned to a shared GFN. 
+ */ + pvt->remove_gfn = false; } retry: @@ -538,12 +682,16 @@ static unsigned long kvmppc_share_page(struct kvm *kvm, unsigned long gpa, uvmem_page = pfn_to_page(uvmem_pfn); pvt = uvmem_page->zone_device_data; pvt->skip_page_out = true; + pvt->remove_gfn = false; /* it continues to be a valid GFN */ kvm_release_pfn_clean(pfn); goto retry; } - if (!uv_page_in(kvm->arch.lpid, pfn << page_shift, gpa, 0, page_shift)) + if (!uv_page_in(kvm->arch.lpid, pfn << page_shift, gpa, 0, + page_shift)) { + kvmppc_gfn_shared(gfn, kvm); ret = H_SUCCESS; + } kvm_release_pfn_clean(pfn); mutex_unlock(&kvm->arch.uvmem_lock); out: @@ -599,6 +747,7 @@ unsigned long kvmppc_h_svm_page_in(struct kvm *kvm, unsigned long gpa, if (!kvmppc_svm_page_in(vma, start, end, gpa, kvm, page_shift)) ret = H_SUCCESS; + out_unlock: mutex_unlock(&kvm->arch.uvmem_lock); out: @@ -707,7 +856,8 @@ static vm_fault_t kvmppc_uvmem_migrate_to_ram(struct vm_fault *vmf) /* * Release the device PFN back to the pool * - * Gets called when secure page becomes a normal page during H_SVM_PAGE_OUT. + * Gets called when secure GFN tranistions from a secure-PFN + * to a normal PFN during H_SVM_PAGE_OUT. * Gets called with kvm->arch.uvmem_lock held. */ static void kvmppc_uvmem_page_free(struct page *page) @@ -722,7 +872,10 @@ static void kvmppc_uvmem_page_free(struct page *page) pvt = page->zone_device_data; page->zone_device_data = NULL; - kvmppc_uvmem_pfn_remove(pvt->gpa >> PAGE_SHIFT, pvt->kvm); + if (pvt->remove_gfn) + kvmppc_gfn_remove(pvt->gpa >> PAGE_SHIFT, pvt->kvm); + else + kvmppc_gfn_secure_mem_pfn(pvt->gpa >> PAGE_SHIFT, pvt->kvm); kfree(pvt); } From patchwork Mon Jul 27 18:07:17 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ram Pai X-Patchwork-Id: 1337135 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 4BFp445mpPz9sR4 for ; Tue, 28 Jul 2020 04:19:32 +1000 (AEST) Authentication-Results: ozlabs.org; dmarc=fail (p=none dis=none) header.from=us.ibm.com Received: from bilbo.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 4BFp444wtrzDqfB for ; Tue, 28 Jul 2020 04:19:32 +1000 (AEST) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: lists.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=us.ibm.com (client-ip=148.163.158.5; helo=mx0b-001b2d01.pphosted.com; envelope-from=linuxram@us.ibm.com; receiver=) Authentication-Results: lists.ozlabs.org; dmarc=pass (p=none dis=none) header.from=us.ibm.com Received: from mx0b-001b2d01.pphosted.com (mx0b-001b2d01.pphosted.com [148.163.158.5]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 4BFnpg0C5vzF0m0 for ; Tue, 28 Jul 2020 04:07:54 +1000 (AEST) Received: from pps.filterd (m0098421.ppops.net [127.0.0.1]) by mx0a-001b2d01.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id 06RI0ngC126585; Mon, 27 Jul 2020 14:07:48 -0400 Received: from ppma06ams.nl.ibm.com (66.31.33a9.ip4.static.sl-reverse.com [169.51.49.102]) by mx0a-001b2d01.pphosted.com with ESMTP id 32j09vfd21-1 (version=TLSv1.2 
From: Ram Pai
To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org
Subject: [PATCH v6 4/5] KVM: PPC: Book3S HV: in H_SVM_INIT_DONE, migrate remaining normal-GFNs to secure-GFNs
Date: Mon, 27 Jul 2020 11:07:17 -0700
Message-Id: <1595873238-26184-5-git-send-email-linuxram@us.ibm.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1595873238-26184-1-git-send-email-linuxram@us.ibm.com>
References: <1595873238-26184-1-git-send-email-linuxram@us.ibm.com>
Cc: ldufour@linux.ibm.com, linuxram@us.ibm.com, cclaudio@linux.ibm.com,
 bharata@linux.ibm.com, sathnaga@linux.vnet.ibm.com,
 aneesh.kumar@linux.ibm.com, sukadev@linux.vnet.ibm.com,
 bauerman@linux.ibm.com, david@gibson.dropbear.id.au

The Ultravisor is expected to explicitly call H_SVM_PAGE_IN for all the
pages of the SVM before calling H_SVM_INIT_DONE. This causes a huge delay
in transitioning the VM to an SVM. The Ultravisor is only interested in
the pages that contain the kernel, initrd and other important data
structures; the rest contain throw-away content.

However, if not all pages are requested by the Ultravisor, the Hypervisor
continues to consider the GFNs corresponding to the non-requested pages as
normal GFNs. This can lead to data corruption and undefined behavior.

In the H_SVM_INIT_DONE handler, move all the PFNs associated with the
SVM's GFNs to secure-PFNs. Skip the GFNs that are already Paged-in, or
Shared, or Paged-in followed by a Paged-out.
Cc: Paul Mackerras Cc: Benjamin Herrenschmidt Cc: Michael Ellerman Cc: Bharata B Rao Cc: Aneesh Kumar K.V Cc: Sukadev Bhattiprolu Cc: Laurent Dufour Cc: Thiago Jung Bauermann Cc: David Gibson Cc: Claudio Carvalho Cc: kvm-ppc@vger.kernel.org Cc: linuxppc-dev@lists.ozlabs.org Reviewed-by: Bharata B Rao Signed-off-by: Ram Pai --- Documentation/powerpc/ultravisor.rst | 2 + arch/powerpc/kvm/book3s_hv_uvmem.c | 154 ++++++++++++++++++++++++++++++----- 2 files changed, 134 insertions(+), 22 deletions(-) diff --git a/Documentation/powerpc/ultravisor.rst b/Documentation/powerpc/ultravisor.rst index a1c8c37..ba6b1bf 100644 --- a/Documentation/powerpc/ultravisor.rst +++ b/Documentation/powerpc/ultravisor.rst @@ -934,6 +934,8 @@ Return values * H_UNSUPPORTED if called from the wrong context (e.g. from an SVM or before an H_SVM_INIT_START hypercall). + * H_STATE if the hypervisor could not successfully + transition the VM to Secure VM. Description ~~~~~~~~~~~ diff --git a/arch/powerpc/kvm/book3s_hv_uvmem.c b/arch/powerpc/kvm/book3s_hv_uvmem.c index 1b2b029..a1664ae 100644 --- a/arch/powerpc/kvm/book3s_hv_uvmem.c +++ b/arch/powerpc/kvm/book3s_hv_uvmem.c @@ -93,6 +93,7 @@ #include #include #include +#include static struct dev_pagemap kvmppc_uvmem_pgmap; static unsigned long *kvmppc_uvmem_bitmap; @@ -348,6 +349,41 @@ static bool kvmppc_gfn_is_uvmem_pfn(unsigned long gfn, struct kvm *kvm, return false; } +/* + * starting from *gfn search for the next available GFN that is not yet + * transitioned to a secure GFN. return the value of that GFN in *gfn. If a + * GFN is found, return true, else return false + * + * Must be called with kvm->arch.uvmem_lock held. + */ +static bool kvmppc_next_nontransitioned_gfn(const struct kvm_memory_slot *memslot, + struct kvm *kvm, unsigned long *gfn) +{ + struct kvmppc_uvmem_slot *p; + bool ret = false; + unsigned long i; + + list_for_each_entry(p, &kvm->arch.uvmem_pfns, list) + if (*gfn >= p->base_pfn && *gfn < p->base_pfn + p->nr_pfns) + break; + if (!p) + return ret; + /* + * The code below assumes, one to one correspondence between + * kvmppc_uvmem_slot and memslot. + */ + for (i = *gfn; i < p->base_pfn + p->nr_pfns; i++) { + unsigned long index = i - p->base_pfn; + + if (!(p->pfns[index] & KVMPPC_GFN_FLAG_MASK)) { + *gfn = i; + ret = true; + break; + } + } + return ret; +} + static int kvmppc_memslot_page_merge(struct kvm *kvm, const struct kvm_memory_slot *memslot, bool merge) { @@ -460,16 +496,6 @@ unsigned long kvmppc_h_svm_init_start(struct kvm *kvm) return ret; } -unsigned long kvmppc_h_svm_init_done(struct kvm *kvm) -{ - if (!(kvm->arch.secure_guest & KVMPPC_SECURE_INIT_START)) - return H_UNSUPPORTED; - - kvm->arch.secure_guest |= KVMPPC_SECURE_INIT_DONE; - pr_info("LPID %d went secure\n", kvm->arch.lpid); - return H_SUCCESS; -} - /* * Drop device pages that we maintain for the secure guest * @@ -588,12 +614,14 @@ static struct page *kvmppc_uvmem_get_page(unsigned long gpa, struct kvm *kvm) } /* - * Alloc a PFN from private device memory pool and copy page from normal - * memory to secure memory using UV_PAGE_IN uvcall. + * Alloc a PFN from private device memory pool. If @pagein is true, + * copy page from normal memory to secure memory using UV_PAGE_IN uvcall. 
*/ -static int kvmppc_svm_page_in(struct vm_area_struct *vma, unsigned long start, - unsigned long end, unsigned long gpa, struct kvm *kvm, - unsigned long page_shift) +static int kvmppc_svm_page_in(struct vm_area_struct *vma, + unsigned long start, + unsigned long end, unsigned long gpa, struct kvm *kvm, + unsigned long page_shift, + bool pagein) { unsigned long src_pfn, dst_pfn = 0; struct migrate_vma mig; @@ -624,11 +652,16 @@ static int kvmppc_svm_page_in(struct vm_area_struct *vma, unsigned long start, goto out_finalize; } - pfn = *mig.src >> MIGRATE_PFN_SHIFT; - spage = migrate_pfn_to_page(*mig.src); - if (spage) - uv_page_in(kvm->arch.lpid, pfn << page_shift, gpa, 0, - page_shift); + if (pagein) { + pfn = *mig.src >> MIGRATE_PFN_SHIFT; + spage = migrate_pfn_to_page(*mig.src); + if (spage) { + ret = uv_page_in(kvm->arch.lpid, pfn << page_shift, + gpa, 0, page_shift); + if (ret) + goto out_finalize; + } + } *mig.dst = migrate_pfn(page_to_pfn(dpage)) | MIGRATE_PFN_LOCKED; migrate_vma_pages(&mig); @@ -637,6 +670,80 @@ static int kvmppc_svm_page_in(struct vm_area_struct *vma, unsigned long start, return ret; } +static int kvmppc_uv_migrate_mem_slot(struct kvm *kvm, + const struct kvm_memory_slot *memslot) +{ + unsigned long gfn = memslot->base_gfn; + struct vm_area_struct *vma; + unsigned long start, end; + int ret = 0; + + mmap_read_lock(kvm->mm); + mutex_lock(&kvm->arch.uvmem_lock); + while (kvmppc_next_nontransitioned_gfn(memslot, kvm, &gfn)) { + ret = H_STATE; + start = gfn_to_hva(kvm, gfn); + if (kvm_is_error_hva(start)) + break; + + end = start + (1UL << PAGE_SHIFT); + vma = find_vma_intersection(kvm->mm, start, end); + if (!vma || vma->vm_start > start || vma->vm_end < end) + break; + + ret = kvmppc_svm_page_in(vma, start, end, + (gfn << PAGE_SHIFT), kvm, PAGE_SHIFT, false); + if (ret) { + ret = H_STATE; + break; + } + + /* relinquish the cpu if needed */ + cond_resched(); + } + mutex_unlock(&kvm->arch.uvmem_lock); + mmap_read_unlock(kvm->mm); + return ret; +} + +unsigned long kvmppc_h_svm_init_done(struct kvm *kvm) +{ + struct kvm_memslots *slots; + struct kvm_memory_slot *memslot; + int srcu_idx; + long ret = H_SUCCESS; + + if (!(kvm->arch.secure_guest & KVMPPC_SECURE_INIT_START)) + return H_UNSUPPORTED; + + /* migrate any unmoved normal pfn to device pfns*/ + srcu_idx = srcu_read_lock(&kvm->srcu); + slots = kvm_memslots(kvm); + kvm_for_each_memslot(memslot, slots) { + ret = kvmppc_uv_migrate_mem_slot(kvm, memslot); + if (ret) { + /* + * The pages will remain transitioned. + * Its the callers responsibility to + * terminate the VM, which will undo + * all state of the VM. Till then + * this VM is in a erroneous state. + * Its KVMPPC_SECURE_INIT_DONE will + * remain unset. + */ + ret = H_STATE; + goto out; + } + } + + kvm->arch.secure_guest |= KVMPPC_SECURE_INIT_DONE; + pr_info("LPID %d went secure\n", kvm->arch.lpid); + +out: + srcu_read_unlock(&kvm->srcu, srcu_idx); + return ret; +} + /* * Shares the page with HV, thus making it a normal page. 
* @@ -745,8 +852,11 @@ unsigned long kvmppc_h_svm_page_in(struct kvm *kvm, unsigned long gpa, if (!vma || vma->vm_start > start || vma->vm_end < end) goto out_unlock; - if (!kvmppc_svm_page_in(vma, start, end, gpa, kvm, page_shift)) - ret = H_SUCCESS; + if (kvmppc_svm_page_in(vma, start, end, gpa, kvm, page_shift, + true)) + goto out_unlock; + + ret = H_SUCCESS; out_unlock: mutex_unlock(&kvm->arch.uvmem_lock); From patchwork Mon Jul 27 18:07:18 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ram Pai X-Patchwork-Id: 1337136 Return-Path: X-Original-To: patchwork-incoming@ozlabs.org Delivered-To: patchwork-incoming@ozlabs.org Received: from lists.ozlabs.org (lists.ozlabs.org [203.11.71.2]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 4BFp6X4bbZz9sR4 for ; Tue, 28 Jul 2020 04:21:40 +1000 (AEST) Authentication-Results: ozlabs.org; dmarc=fail (p=none dis=none) header.from=us.ibm.com Received: from bilbo.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3]) by lists.ozlabs.org (Postfix) with ESMTP id 4BFp6X2wCszF1cv for ; Tue, 28 Jul 2020 04:21:40 +1000 (AEST) X-Original-To: linuxppc-dev@lists.ozlabs.org Delivered-To: linuxppc-dev@lists.ozlabs.org Authentication-Results: lists.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=us.ibm.com (client-ip=148.163.158.5; helo=mx0a-001b2d01.pphosted.com; envelope-from=linuxram@us.ibm.com; receiver=) Authentication-Results: lists.ozlabs.org; dmarc=pass (p=none dis=none) header.from=us.ibm.com Received: from mx0a-001b2d01.pphosted.com (mx0b-001b2d01.pphosted.com [148.163.158.5]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by lists.ozlabs.org (Postfix) with ESMTPS id 4BFnpj6Fb2zF0m0 for ; Tue, 28 Jul 2020 04:07:57 +1000 (AEST) Received: from pps.filterd (m0098420.ppops.net [127.0.0.1]) by mx0b-001b2d01.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id 06RI1euN123748; Mon, 27 Jul 2020 14:07:51 -0400 Received: from ppma06ams.nl.ibm.com (66.31.33a9.ip4.static.sl-reverse.com [169.51.49.102]) by mx0b-001b2d01.pphosted.com with ESMTP id 32hs0scc75-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Mon, 27 Jul 2020 14:07:51 -0400 Received: from pps.filterd (ppma06ams.nl.ibm.com [127.0.0.1]) by ppma06ams.nl.ibm.com (8.16.0.42/8.16.0.42) with SMTP id 06RI2CQR010060; Mon, 27 Jul 2020 18:07:49 GMT Received: from b06cxnps4076.portsmouth.uk.ibm.com (d06relay13.portsmouth.uk.ibm.com [9.149.109.198]) by ppma06ams.nl.ibm.com with ESMTP id 32gcqgjha2-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Mon, 27 Jul 2020 18:07:49 +0000 Received: from d06av23.portsmouth.uk.ibm.com (d06av23.portsmouth.uk.ibm.com [9.149.105.59]) by b06cxnps4076.portsmouth.uk.ibm.com (8.14.9/8.14.9/NCO v10.0) with ESMTP id 06RI7kEI59637834 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Mon, 27 Jul 2020 18:07:46 GMT Received: from d06av23.portsmouth.uk.ibm.com (unknown [127.0.0.1]) by IMSVA (Postfix) with ESMTP id B69E6A4051; Mon, 27 Jul 2020 18:07:46 +0000 (GMT) Received: from d06av23.portsmouth.uk.ibm.com (unknown [127.0.0.1]) by IMSVA (Postfix) with ESMTP id 97DF6A4040; Mon, 27 Jul 2020 18:07:43 +0000 (GMT) Received: from oc0525413822.ibm.com (unknown [9.163.69.7]) by d06av23.portsmouth.uk.ibm.com (Postfix) with ESMTP; Mon, 27 Jul 2020 
18:07:43 +0000 (GMT) From: Ram Pai To: kvm-ppc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org Subject: [PATCH v6 5/5] KVM: PPC: Book3S HV: migrate hot plugged memory Date: Mon, 27 Jul 2020 11:07:18 -0700 Message-Id: <1595873238-26184-6-git-send-email-linuxram@us.ibm.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1595873238-26184-1-git-send-email-linuxram@us.ibm.com> References: <1595873238-26184-1-git-send-email-linuxram@us.ibm.com> X-TM-AS-GCONF: 00 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.235, 18.0.687 definitions=2020-07-27_12:2020-07-27, 2020-07-27 signatures=0 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 malwarescore=0 adultscore=0 bulkscore=0 lowpriorityscore=0 phishscore=0 mlxlogscore=999 spamscore=0 mlxscore=0 priorityscore=1501 impostorscore=0 suspectscore=2 clxscore=1015 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2006250000 definitions=main-2007270118 X-BeenThere: linuxppc-dev@lists.ozlabs.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Linux on PowerPC Developers Mail List List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: ldufour@linux.ibm.com, linuxram@us.ibm.com, cclaudio@linux.ibm.com, bharata@linux.ibm.com, sathnaga@linux.vnet.ibm.com, aneesh.kumar@linux.ibm.com, sukadev@linux.vnet.ibm.com, bauerman@linux.ibm.com, david@gibson.dropbear.id.au Errors-To: linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org Sender: "Linuxppc-dev" From: Laurent Dufour When a memory slot is hot plugged to a SVM, PFNs associated with the GFNs in that slot must be migrated to the secure-PFNs, aka device-PFNs. Call kvmppc_uv_migrate_mem_slot() to accomplish this. Disable page-merge for all pages in the memory slot. Reviewed-by: Bharata B Rao Signed-off-by: Ram Pai [rearranged the code, and modified the commit log] Signed-off-by: Laurent Dufour --- arch/powerpc/include/asm/kvm_book3s_uvmem.h | 14 ++++++++++++++ arch/powerpc/kvm/book3s_hv.c | 14 ++++++-------- arch/powerpc/kvm/book3s_hv_uvmem.c | 23 +++++++++++++++++++---- 3 files changed, 39 insertions(+), 12 deletions(-) diff --git a/arch/powerpc/include/asm/kvm_book3s_uvmem.h b/arch/powerpc/include/asm/kvm_book3s_uvmem.h index 9cb7d8b..0a63194 100644 --- a/arch/powerpc/include/asm/kvm_book3s_uvmem.h +++ b/arch/powerpc/include/asm/kvm_book3s_uvmem.h @@ -23,6 +23,10 @@ unsigned long kvmppc_h_svm_page_out(struct kvm *kvm, unsigned long kvmppc_h_svm_init_abort(struct kvm *kvm); void kvmppc_uvmem_drop_pages(const struct kvm_memory_slot *free, struct kvm *kvm, bool skip_page_out); +int kvmppc_uvmem_memslot_create(struct kvm *kvm, + const struct kvm_memory_slot *new); +void kvmppc_uvmem_memslot_delete(struct kvm *kvm, + const struct kvm_memory_slot *old); #else static inline int kvmppc_uvmem_init(void) { @@ -82,5 +86,15 @@ static inline int kvmppc_send_page_to_uv(struct kvm *kvm, unsigned long gfn) static inline void kvmppc_uvmem_drop_pages(const struct kvm_memory_slot *free, struct kvm *kvm, bool skip_page_out) { } + +static inline int kvmppc_uvmem_memslot_create(struct kvm *kvm, + const struct kvm_memory_slot *new) +{ + return H_UNSUPPORTED; +} + +static inline void kvmppc_uvmem_memslot_delete(struct kvm *kvm, + const struct kvm_memory_slot *old) { } + #endif /* CONFIG_PPC_UV */ #endif /* __ASM_KVM_BOOK3S_UVMEM_H__ */ diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c index d331b46..a93bc65 100644 --- a/arch/powerpc/kvm/book3s_hv.c +++ b/arch/powerpc/kvm/book3s_hv.c @@ -4515,16 +4515,14 @@ static void 
kvmppc_core_commit_memory_region_hv(struct kvm *kvm, switch (change) { case KVM_MR_CREATE: - if (kvmppc_uvmem_slot_init(kvm, new)) - return; - uv_register_mem_slot(kvm->arch.lpid, - new->base_gfn << PAGE_SHIFT, - new->npages * PAGE_SIZE, - 0, new->id); + /* + * @TODO kvmppc_uvmem_memslot_create() can fail and + * return error. Fix this. + */ + kvmppc_uvmem_memslot_create(kvm, new); break; case KVM_MR_DELETE: - uv_unregister_mem_slot(kvm->arch.lpid, old->id); - kvmppc_uvmem_slot_free(kvm, old); + kvmppc_uvmem_memslot_delete(kvm, old); break; default: /* TODO: Handle KVM_MR_MOVE */ diff --git a/arch/powerpc/kvm/book3s_hv_uvmem.c b/arch/powerpc/kvm/book3s_hv_uvmem.c index a1664ae..5b917ea 100644 --- a/arch/powerpc/kvm/book3s_hv_uvmem.c +++ b/arch/powerpc/kvm/book3s_hv_uvmem.c @@ -418,7 +418,7 @@ static int kvmppc_memslot_page_merge(struct kvm *kvm, return ret; } -static void kvmppc_uvmem_memslot_delete(struct kvm *kvm, +static void __kvmppc_uvmem_memslot_delete(struct kvm *kvm, const struct kvm_memory_slot *memslot) { uv_unregister_mem_slot(kvm->arch.lpid, memslot->id); @@ -426,7 +426,7 @@ static void kvmppc_uvmem_memslot_delete(struct kvm *kvm, kvmppc_memslot_page_merge(kvm, memslot, true); } -static int kvmppc_uvmem_memslot_create(struct kvm *kvm, +static int __kvmppc_uvmem_memslot_create(struct kvm *kvm, const struct kvm_memory_slot *memslot) { int ret = H_PARAMETER; @@ -478,7 +478,7 @@ unsigned long kvmppc_h_svm_init_start(struct kvm *kvm) /* register the memslot */ slots = kvm_memslots(kvm); kvm_for_each_memslot(memslot, slots) { - ret = kvmppc_uvmem_memslot_create(kvm, memslot); + ret = __kvmppc_uvmem_memslot_create(kvm, memslot); if (ret) break; } @@ -488,7 +488,7 @@ unsigned long kvmppc_h_svm_init_start(struct kvm *kvm) kvm_for_each_memslot(m, slots) { if (m == memslot) break; - kvmppc_uvmem_memslot_delete(kvm, memslot); + __kvmppc_uvmem_memslot_delete(kvm, memslot); } } @@ -1057,6 +1057,21 @@ int kvmppc_send_page_to_uv(struct kvm *kvm, unsigned long gfn) return (ret == U_SUCCESS) ? RESUME_GUEST : -EFAULT; } +int kvmppc_uvmem_memslot_create(struct kvm *kvm, const struct kvm_memory_slot *new) +{ + int ret = __kvmppc_uvmem_memslot_create(kvm, new); + + if (!ret) + ret = kvmppc_uv_migrate_mem_slot(kvm, new); + + return ret; +} + +void kvmppc_uvmem_memslot_delete(struct kvm *kvm, const struct kvm_memory_slot *old) +{ + __kvmppc_uvmem_memslot_delete(kvm, old); +} + static u64 kvmppc_get_secmem_size(void) { struct device_node *np;