From patchwork Wed Nov 21 05:28:09 2018
X-Patchwork-Submitter: Bharata B Rao
X-Patchwork-Id: 1000929
From: Bharata B Rao <bharata@linux.ibm.com>
To: linuxppc-dev@lists.ozlabs.org
Cc: kvm-ppc@vger.kernel.org, linux-mm@kvack.org, paulus@au1.ibm.com,
    benh@linux.ibm.com, aneesh.kumar@linux.vnet.ibm.com, jglisse@redhat.com,
    linuxram@us.ibm.com, Bharata B Rao <bharata@linux.ibm.com>
Subject: [RFC PATCH v2 2/4] kvmppc: Add support for shared pages in HMM driver
Date: Wed, 21 Nov 2018 10:58:09 +0530
Message-Id: <20181121052811.4819-3-bharata@linux.ibm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20181121052811.4819-1-bharata@linux.ibm.com>
References: <20181121052811.4819-1-bharata@linux.ibm.com>
X-Mailing-List: kvm-ppc@vger.kernel.org

A secure guest will share some of its pages with the hypervisor (e.g.
virtio bounce buffers). Add support for such shared pages in the HMM
driver.

Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
---
 arch/powerpc/include/asm/hvcall.h |  3 ++
 arch/powerpc/kvm/book3s_hv_hmm.c  | 58 +++++++++++++++++++++++++++++--
 2 files changed, 58 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/include/asm/hvcall.h b/arch/powerpc/include/asm/hvcall.h
index c900f47c0a9f..34791c627f87 100644
--- a/arch/powerpc/include/asm/hvcall.h
+++ b/arch/powerpc/include/asm/hvcall.h
@@ -336,6 +336,9 @@
 #define H_ENTER_NESTED		0xF804
 #define H_TLB_INVALIDATE	0xF808
 
+/* Flags for H_SVM_PAGE_IN */
+#define H_PAGE_IN_SHARED	0x1
+
 /* Platform-specific hcalls used by the Ultravisor */
 #define H_SVM_PAGE_IN		0xFF00
 #define H_SVM_PAGE_OUT		0xFF04
diff --git a/arch/powerpc/kvm/book3s_hv_hmm.c b/arch/powerpc/kvm/book3s_hv_hmm.c
index 5f2a924a4f16..2730ab832330 100644
--- a/arch/powerpc/kvm/book3s_hv_hmm.c
+++ b/arch/powerpc/kvm/book3s_hv_hmm.c
@@ -45,6 +45,7 @@ struct kvmppc_hmm_page_pvt {
 	unsigned long *rmap;
 	unsigned int lpid;
 	unsigned long gpa;
+	bool skip_page_out;
 };
 
 struct kvmppc_hmm_migrate_args {
@@ -212,6 +213,45 @@ static const struct migrate_vma_ops kvmppc_hmm_migrate_ops = {
 	.finalize_and_map = kvmppc_hmm_migrate_finalize_and_map,
 };
 
+/*
+ * Shares the page with HV, thus making it a normal page.
+ *
+ * - If the page is already secure, then provision a new page and share
+ * - If the page is a normal page, share the existing page
+ *
+ * In the former case, uses the HMM fault handler to release the HMM page.
+ */
+static unsigned long
+kvmppc_share_page(struct kvm *kvm, unsigned long *rmap, unsigned long gpa,
+		  unsigned long addr, unsigned long page_shift)
+{
+
+	int ret;
+	unsigned int lpid = kvm->arch.lpid;
+	struct page *hmm_page;
+	struct kvmppc_hmm_page_pvt *pvt;
+	unsigned long pfn;
+	int srcu_idx;
+
+	if (kvmppc_is_hmm_pfn(*rmap)) {
+		hmm_page = pfn_to_page(*rmap & ~KVMPPC_PFN_HMM);
+		pvt = (struct kvmppc_hmm_page_pvt *)
+			hmm_devmem_page_get_drvdata(hmm_page);
+		pvt->skip_page_out = true;
+	}
+
+	srcu_idx = srcu_read_lock(&kvm->srcu);
+	pfn = gfn_to_pfn(kvm, gpa >> page_shift);
+	srcu_read_unlock(&kvm->srcu, srcu_idx);
+	if (is_error_noslot_pfn(pfn))
+		return H_PARAMETER;
+
+	ret = uv_page_in(lpid, pfn << page_shift, gpa, 0, page_shift);
+	kvm_release_pfn_clean(pfn);
+
+	return (ret == U_SUCCESS) ? H_SUCCESS : H_PARAMETER;
+}
+
 /*
  * Move page from normal memory to secure memory.
  */
@@ -243,9 +283,12 @@ kvmppc_h_svm_page_in(struct kvm *kvm, unsigned long gpa,
 
 	end = addr + (1UL << page_shift);
 
-	if (flags)
+	if (flags & ~H_PAGE_IN_SHARED)
 		return H_P2;
 
+	if (flags & H_PAGE_IN_SHARED)
+		return kvmppc_share_page(kvm, rmap, gpa, addr, page_shift);
+
 	args.rmap = rmap;
 	args.lpid = kvm->arch.lpid;
 	args.gpa = gpa;
@@ -292,8 +335,17 @@ kvmppc_hmm_fault_migrate_alloc_and_copy(struct vm_area_struct *vma,
 			hmm_devmem_page_get_drvdata(spage);
 
 	pfn = page_to_pfn(dpage);
-	ret = uv_page_out(pvt->lpid, pfn << PAGE_SHIFT,
-			  pvt->gpa, 0, PAGE_SHIFT);
+
+	/*
+	 * This same alloc_and_copy() callback is used in two cases:
+	 * - When HV touches a secure page, for which we do page-out
+	 * - When a secure page is converted to shared page, we touch
+	 *   the page to essentially discard the HMM page. In this case we
+	 *   skip page-out.
+	 */
+	if (!pvt->skip_page_out)
+		ret = uv_page_out(pvt->lpid, pfn << PAGE_SHIFT,
+				  pvt->gpa, 0, PAGE_SHIFT);
 	if (ret == U_SUCCESS)
 		*dst_pfn = migrate_pfn(pfn) | MIGRATE_PFN_LOCKED;
 }
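
The snippet below is a minimal standalone sketch (not part of the patch)
of the flag handling that kvmppc_h_svm_page_in() gains above. Only
H_PAGE_IN_SHARED and the dispatch logic come from the patch; the numeric
value used for H_P2 and the share_page()/migrate_to_secure() helpers are
stand-ins invented for the demo, not the kernel's definitions.

/* page_in_flags.c: standalone model of the H_SVM_PAGE_IN flag handling. */
#include <stdio.h>

#define H_SUCCESS		0
#define H_P2			-55	/* stand-in value for the demo */
#define H_PAGE_IN_SHARED	0x1UL

/* Stand-ins for kvmppc_share_page() and the HMM migration path. */
static long share_page(void)        { return H_SUCCESS; }
static long migrate_to_secure(void) { return H_SUCCESS; }

static long h_svm_page_in(unsigned long flags)
{
	/* Any flag bit other than H_PAGE_IN_SHARED is an error (H_P2). */
	if (flags & ~H_PAGE_IN_SHARED)
		return H_P2;

	/* Shared page: keep it accessible to the HV, no HMM migration. */
	if (flags & H_PAGE_IN_SHARED)
		return share_page();

	/* Default: migrate the page from normal to secure memory. */
	return migrate_to_secure();
}

int main(void)
{
	printf("flags 0x0 -> %ld (migrate)\n", h_svm_page_in(0x0));
	printf("flags 0x1 -> %ld (share)\n",   h_svm_page_in(0x1));
	printf("flags 0x2 -> %ld (H_P2)\n",    h_svm_page_in(0x2));
	return 0;
}

Checking flags with "flags & ~H_PAGE_IN_SHARED" keeps the hcall
forward-compatible: a flag bit the hypervisor does not understand fails
cleanly with H_P2 instead of being silently ignored.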
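The skip_page_out handshake can be modeled the same way. The sketch
below is again a stand-in (a reduced struct and a stubbed uv_page_out(),
with 0 playing the role of U_SUCCESS): kvmppc_share_page() sets
skip_page_out on the device page's private data, so the later fault-path
alloc_and_copy() callback drops the page-out and simply discards the
secure copy.

/* skip_page_out.c: standalone model of the skip_page_out handshake. */
#include <stdbool.h>
#include <stdio.h>

/* Reduced stand-in for struct kvmppc_hmm_page_pvt. */
struct page_pvt {
	unsigned long gpa;
	bool skip_page_out;
};

/* Stub for uv_page_out(); 0 plays the role of U_SUCCESS. */
static int uv_page_out(unsigned long gpa)
{
	printf("page-out of gpa 0x%lx\n", gpa);
	return 0;
}

/* Models kvmppc_hmm_fault_migrate_alloc_and_copy(): page-out is skipped
 * when the page was converted to shared, since the secure copy is being
 * discarded rather than preserved.
 */
static void fault_alloc_and_copy(struct page_pvt *pvt)
{
	int ret = 0;	/* initialized so the skip path is well defined */

	if (!pvt->skip_page_out)
		ret = uv_page_out(pvt->gpa);
	if (ret == 0)
		printf("gpa 0x%lx remapped as normal page\n", pvt->gpa);
}

int main(void)
{
	struct page_pvt secure = { 0x1000, false };	/* HV touched a secure page */
	struct page_pvt shared = { 0x2000, true };	/* page converted to shared */

	fault_alloc_and_copy(&secure);
	fault_alloc_and_copy(&shared);
	return 0;
}

The conversion works because sharing a page touches it, which drives the
same HMM fault path as a regular HV access; skip_page_out is what
distinguishes the two cases.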