From patchwork Thu Aug 22 10:26:15 2019
X-Patchwork-Submitter: Bharata B Rao
X-Patchwork-Id: 1151450
From: Bharata B Rao
To: linuxppc-dev@lists.ozlabs.org
Cc: kvm-ppc@vger.kernel.org, linux-mm@kvack.org, paulus@au1.ibm.com,
    aneesh.kumar@linux.vnet.ibm.com, jglisse@redhat.com, linuxram@us.ibm.com,
    sukadev@linux.vnet.ibm.com, cclaudio@linux.ibm.com, hch@lst.de,
    Bharata B Rao
Subject: [PATCH v7 2/7] kvmppc: Shared pages support for secure guests
Date: Thu, 22 Aug 2019 15:56:15 +0530
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20190822102620.21897-1-bharata@linux.ibm.com>
References: <20190822102620.21897-1-bharata@linux.ibm.com>
Message-Id: <20190822102620.21897-3-bharata@linux.ibm.com>
X-Mailing-List: kvm-ppc@vger.kernel.org

A secure guest will share some of its pages with the hypervisor (e.g.
virtio bounce buffers). Add support for sharing pages between the
hypervisor and the ultravisor. Once a secure page is converted to a
shared page, the device page is unmapped from the HV-side page tables.
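For reference, the expected guest-side trigger for this path is the
page-sharing ultracall made when a secure guest marks memory (such as its
SWIOTLB/virtio bounce buffers) as shared. That interface is not part of
this patch, so the sketch below is illustrative only; the names
UV_SHARE_PAGE, ucall_norets(), uv_share_page() and set_memory_shared()
are assumptions taken from the companion guest-side work, not from this
series.

/*
 * Illustrative guest-side sketch (assumed interface, not part of this
 * patch): a secure guest asks the ultravisor to share a page with the
 * hypervisor; the ultravisor then reflects this to the HV as an
 * H_SVM_PAGE_IN hcall with the H_PAGE_IN_SHARED flag, which lands in
 * kvmppc_share_page() below.
 */
static inline int uv_share_page(u64 pfn, u64 npages)
{
	/* UV_SHARE_PAGE and ucall_norets() are assumed names here */
	return ucall_norets(UV_SHARE_PAGE, pfn, npages);
}

/* e.g. called when the guest converts a buffer to shared/decrypted */
int set_memory_shared(unsigned long addr, int numpages)
{
	return uv_share_page(PHYS_PFN(__pa(addr)), numpages);
}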
Signed-off-by: Bharata B Rao
---
 arch/powerpc/include/asm/hvcall.h |  3 ++
 arch/powerpc/kvm/book3s_hv_devm.c | 70 +++++++++++++++++++++++++++++--
 2 files changed, 69 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/include/asm/hvcall.h b/arch/powerpc/include/asm/hvcall.h
index 2f6b952deb0f..05b8536f6653 100644
--- a/arch/powerpc/include/asm/hvcall.h
+++ b/arch/powerpc/include/asm/hvcall.h
@@ -337,6 +337,9 @@
 #define H_TLB_INVALIDATE	0xF808
 #define H_COPY_TOFROM_GUEST	0xF80C
 
+/* Flags for H_SVM_PAGE_IN */
+#define H_PAGE_IN_SHARED	0x1
+
 /* Platform-specific hcalls used by the Ultravisor */
 #define H_SVM_PAGE_IN		0xEF00
 #define H_SVM_PAGE_OUT		0xEF04
diff --git a/arch/powerpc/kvm/book3s_hv_devm.c b/arch/powerpc/kvm/book3s_hv_devm.c
index 13722f27fa7d..6a3229b78fed 100644
--- a/arch/powerpc/kvm/book3s_hv_devm.c
+++ b/arch/powerpc/kvm/book3s_hv_devm.c
@@ -46,6 +46,7 @@ struct kvmppc_devm_page_pvt {
 	unsigned long *rmap;
 	unsigned int lpid;
 	unsigned long gpa;
+	bool skip_page_out;
 };
 
 /*
@@ -139,6 +140,54 @@ kvmppc_devm_migrate_alloc_and_copy(struct migrate_vma *mig,
 	return 0;
 }
 
+/*
+ * Shares the page with HV, thus making it a normal page.
+ *
+ * - If the page is already secure, then provision a new page and share
+ * - If the page is a normal page, share the existing page
+ *
+ * In the former case, uses the dev_pagemap_ops migrate_to_ram handler
+ * to unmap the device page from QEMU's page tables.
+ */
+static unsigned long
+kvmppc_share_page(struct kvm *kvm, unsigned long gpa, unsigned long page_shift)
+{
+
+	int ret = H_PARAMETER;
+	struct page *devm_page;
+	struct kvmppc_devm_page_pvt *pvt;
+	unsigned long pfn;
+	unsigned long *rmap;
+	struct kvm_memory_slot *slot;
+	unsigned long gfn = gpa >> page_shift;
+	int srcu_idx;
+
+	srcu_idx = srcu_read_lock(&kvm->srcu);
+	slot = gfn_to_memslot(kvm, gfn);
+	if (!slot)
+		goto out;
+
+	rmap = &slot->arch.rmap[gfn - slot->base_gfn];
+	if (kvmppc_rmap_is_devm_pfn(*rmap)) {
+		devm_page = pfn_to_page(*rmap & ~KVMPPC_RMAP_DEVM_PFN);
+		pvt = (struct kvmppc_devm_page_pvt *)
+			devm_page->zone_device_data;
+		pvt->skip_page_out = true;
+	}
+
+	pfn = gfn_to_pfn(kvm, gpa >> page_shift);
+	if (is_error_noslot_pfn(pfn))
+		goto out;
+
+	ret = uv_page_in(kvm->arch.lpid, pfn << page_shift, gpa, 0, page_shift);
+	if (ret == U_SUCCESS)
+		ret = H_SUCCESS;
+	kvm_release_pfn_clean(pfn);
+out:
+	srcu_read_unlock(&kvm->srcu, srcu_idx);
+	return ret;
+}
+
 /*
  * Move page from normal memory to secure memory.
  */
@@ -159,9 +208,12 @@ kvmppc_h_svm_page_in(struct kvm *kvm, unsigned long gpa,
 	if (page_shift != PAGE_SHIFT)
 		return H_P3;
 
-	if (flags)
+	if (flags & ~H_PAGE_IN_SHARED)
 		return H_P2;
 
+	if (flags & H_PAGE_IN_SHARED)
+		return kvmppc_share_page(kvm, gpa, page_shift);
+
 	ret = H_PARAMETER;
 	down_read(&kvm->mm->mmap_sem);
 	srcu_idx = srcu_read_lock(&kvm->srcu);
@@ -211,7 +263,7 @@ kvmppc_devm_fault_migrate_alloc_and_copy(struct migrate_vma *mig,
 	struct page *dpage, *spage;
 	struct kvmppc_devm_page_pvt *pvt;
 	unsigned long pfn;
-	int ret;
+	int ret = U_SUCCESS;
 
 	spage = migrate_pfn_to_page(*mig->src);
 	if (!spage || !(*mig->src & MIGRATE_PFN_MIGRATE))
@@ -226,8 +278,18 @@ kvmppc_devm_fault_migrate_alloc_and_copy(struct migrate_vma *mig,
 	pvt = spage->zone_device_data;
 	pfn = page_to_pfn(dpage);
-	ret = uv_page_out(pvt->lpid, pfn << page_shift, pvt->gpa, 0,
-			  page_shift);
+
+	/*
+	 * This same function is used in two cases:
+	 * - When HV touches a secure page, for which we do page-out
+	 * - When a secure page is converted to shared page, we touch
+	 *   the page to essentially unmap the device page. In this
+	 *   case we skip page-out.
+	 */
+	if (!pvt->skip_page_out)
+		ret = uv_page_out(pvt->lpid, pfn << page_shift, pvt->gpa, 0,
+				  page_shift);
+
 	if (ret == U_SUCCESS)
 		*mig->dst = migrate_pfn(pfn) | MIGRATE_PFN_LOCKED;
 	else {
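Note: kvmppc_share_page() above relies on the device-PFN encoding kept in
the memslot rmap, which is introduced by an earlier patch in this series
and not shown here. The sketch below is only a guess at what those
helpers look like, for reference while reading the hunk; the flag bit
value is an assumption, not taken from this patch.

/*
 * Assumed rmap device-PFN encoding used by kvmppc_share_page() (defined
 * by an earlier patch in this series; the flag bit below is illustrative
 * only). When a GFN has been paged in to secure (device) memory, its
 * rmap entry holds the device PFN tagged with KVMPPC_RMAP_DEVM_PFN.
 */
#define KVMPPC_RMAP_DEVM_PFN	(0x1ULL << 62)

static inline bool kvmppc_rmap_is_devm_pfn(unsigned long pfn)
{
	return !!(pfn & KVMPPC_RMAP_DEVM_PFN);
}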