From patchwork Thu Feb 20 16:30:21 2014
X-Patchwork-Submitter: Mihai Caraman
X-Patchwork-Id: 322253
From: Mihai Caraman
Subject: [PATCH 4/4] KVM: PPC: Bookehv: Get vcpu's last instruction for emulation
Date: Thu, 20 Feb 2014 18:30:21 +0200
Message-ID: <1392913821-4520-4-git-send-email-mihai.caraman@freescale.com>
In-Reply-To: <1392913821-4520-1-git-send-email-mihai.caraman@freescale.com>
References: <1392913821-4520-1-git-send-email-mihai.caraman@freescale.com>
X-Mailing-List: kvm-ppc@vger.kernel.org

Load external pid (lwepx) instruction faults (when taken from KVM with
guest context) need to be handled by KVM. This implies additional code
in the DO_KVM macro to identify the source of the exception (which
originates from the KVM host rather than the guest). The hook has to
check the Exception Syndrome Register ESR[EPID] and the External PID
Load Context Register EPLC[EGS] for some exceptions (DTLB_MISS, DSI
and LRAT). Doing this on the Data TLB miss exception is obviously
intrusive for the host. Get rid of lwepx and acquire the last
instruction in kvmppc_get_last_inst() by looking up the physical
address and kmapping it.
This fixes an infinite loop caused by lwepx's data TLB miss being
handled in the host, and resolves the TODO for TLB eviction and
execute-but-not-read entries.

Signed-off-by: Mihai Caraman
---
 arch/powerpc/kvm/bookehv_interrupts.S | 37 +++----------
 arch/powerpc/kvm/e500_mmu_host.c      | 93 +++++++++++++++++++++++++++++++++
 2 files changed, 102 insertions(+), 28 deletions(-)

diff --git a/arch/powerpc/kvm/bookehv_interrupts.S b/arch/powerpc/kvm/bookehv_interrupts.S
index 20c7a54..c50490c 100644
--- a/arch/powerpc/kvm/bookehv_interrupts.S
+++ b/arch/powerpc/kvm/bookehv_interrupts.S
@@ -119,38 +119,14 @@
 1:

 	.if	\flags & NEED_EMU
-	/*
-	 * This assumes you have external PID support.
-	 * To support a bookehv CPU without external PID, you'll
-	 * need to look up the TLB entry and create a temporary mapping.
-	 *
-	 * FIXME: we don't currently handle if the lwepx faults. PR-mode
-	 * booke doesn't handle it either. Since Linux doesn't use
-	 * broadcast tlbivax anymore, the only way this should happen is
-	 * if the guest maps its memory execute-but-not-read, or if we
-	 * somehow take a TLB miss in the middle of this entry code and
-	 * evict the relevant entry. On e500mc, all kernel lowmem is
-	 * bolted into TLB1 large page mappings, and we don't use
-	 * broadcast invalidates, so we should not take a TLB miss here.
-	 *
-	 * Later we'll need to deal with faults here. Disallowing guest
-	 * mappings that are execute-but-not-read could be an option on
-	 * e500mc, but not on chips with an LRAT if it is used.
-	 */
-
-	mfspr	r3, SPRN_EPLC	/* will already have correct ELPID and EGS */
 	PPC_STL	r15, VCPU_GPR(R15)(r4)
 	PPC_STL	r16, VCPU_GPR(R16)(r4)
 	PPC_STL	r17, VCPU_GPR(R17)(r4)
 	PPC_STL	r18, VCPU_GPR(R18)(r4)
 	PPC_STL	r19, VCPU_GPR(R19)(r4)
-	mr	r8, r3
 	PPC_STL	r20, VCPU_GPR(R20)(r4)
-	rlwimi	r8, r6, EPC_EAS_SHIFT - MSR_IR_LG, EPC_EAS
 	PPC_STL	r21, VCPU_GPR(R21)(r4)
-	rlwimi	r8, r6, EPC_EPR_SHIFT - MSR_PR_LG, EPC_EPR
 	PPC_STL	r22, VCPU_GPR(R22)(r4)
-	rlwimi	r8, r10, EPC_EPID_SHIFT, EPC_EPID
 	PPC_STL	r23, VCPU_GPR(R23)(r4)
 	PPC_STL	r24, VCPU_GPR(R24)(r4)
 	PPC_STL	r25, VCPU_GPR(R25)(r4)
@@ -160,10 +136,15 @@
 	PPC_STL	r29, VCPU_GPR(R29)(r4)
 	PPC_STL	r30, VCPU_GPR(R30)(r4)
 	PPC_STL	r31, VCPU_GPR(R31)(r4)
-	mtspr	SPRN_EPLC, r8
-	isync
-	lwepx	r9, 0, r5
-	mtspr	SPRN_EPLC, r3
+
+	/*
+	 * We don't use external PID support. lwepx faults would need to be
+	 * handled by KVM and this implies additional code in DO_KVM (for
+	 * DTLB_MISS, DSI and LRAT) to check ESR[EPID] and EPLC[EGS] which
+	 * is too intrusive for the host. Get the last instruction in
+	 * kvmppc_get_last_inst().
+	 */
+	li	r9, KVM_INST_FETCH_FAILED
 	stw	r9, VCPU_LAST_INST(r4)
 	.endif

diff --git a/arch/powerpc/kvm/e500_mmu_host.c b/arch/powerpc/kvm/e500_mmu_host.c
index 6025cb7..1b4cb41 100644
--- a/arch/powerpc/kvm/e500_mmu_host.c
+++ b/arch/powerpc/kvm/e500_mmu_host.c
@@ -598,9 +598,102 @@ void kvmppc_mmu_map(struct kvm_vcpu *vcpu, u64 eaddr, gpa_t gpaddr,
 	}
 }

+#ifdef CONFIG_KVM_BOOKE_HV
+int kvmppc_ld_inst(struct kvm_vcpu *vcpu, u32 *instr)
+{
+	gva_t geaddr;
+	hpa_t addr;
+	hfn_t pfn;
+	hva_t eaddr;
+	u32 mas0, mas1, mas2, mas3;
+	u64 mas7_mas3;
+	struct page *page;
+	unsigned int addr_space, psize_shift;
+	bool pr;
+	unsigned long flags;
+
+	/* Search TLB for guest pc to get the real address */
+	geaddr = kvmppc_get_pc(vcpu);
+	addr_space = (vcpu->arch.shared->msr & MSR_IS) >> MSR_IR_LG;
+
+	local_irq_save(flags);
+	mtspr(SPRN_MAS6, (vcpu->arch.pid << MAS6_SPID_SHIFT) | addr_space);
+	mtspr(SPRN_MAS5, MAS5_SGS | vcpu->kvm->arch.lpid);
+	isync();
+	asm volatile("tlbsx 0, %[geaddr]\n" : : [geaddr] "r" (geaddr));
+	mtspr(SPRN_MAS5, 0);
+	mtspr(SPRN_MAS8, 0);
+	mas0 = mfspr(SPRN_MAS0);
+	mas1 = mfspr(SPRN_MAS1);
+	mas2 = mfspr(SPRN_MAS2);
+	mas3 = mfspr(SPRN_MAS3);
+	mas7_mas3 = (((u64) mfspr(SPRN_MAS7)) << 32) | mfspr(SPRN_MAS3);
+	local_irq_restore(flags);
+
+	/*
+	 * If the TLB entry for guest pc was evicted, return to the guest.
+	 * There are high chances to find a valid TLB entry next time.
+	 */
+	if (!(mas1 & MAS1_VALID))
+		return EMULATE_AGAIN;
+
+	/*
+	 * Another thread may rewrite the TLB entry in parallel, don't
+	 * execute from the address if the execute permission is not set
+	 */
+	pr = vcpu->arch.shared->msr & MSR_PR;
+	if ((pr && (!(mas3 & MAS3_UX))) || ((!pr) && (!(mas3 & MAS3_SX)))) {
+		kvmppc_core_queue_inst_storage(vcpu, 0);
+		return EMULATE_AGAIN;
+	}
+
+	/*
+	 * We will map the real address through a cacheable page, so we will
+	 * not support cache-inhibited guest pages. Fortunately emulated
+	 * instructions should not live there.
+	 */
+	if (mas2 & MAS2_I) {
+		printk(KERN_CRIT "Instruction emulation from cache-inhibited "
+				"guest pages is not supported\n");
+		return EMULATE_FAIL;
+	}
+
+	/* Get page size */
+	if (MAS0_GET_TLBSEL(mas0) == 0)
+		psize_shift = PAGE_SHIFT;
+	else
+		psize_shift = MAS1_GET_TSIZE(mas1) + 10;
+
+	/* Map a page and get guest's instruction */
+	addr = (mas7_mas3 & (~0ULL << psize_shift)) |
+	       (geaddr & ((1ULL << psize_shift) - 1ULL));
+	pfn = addr >> PAGE_SHIFT;
+
+	if (unlikely(!pfn_valid(pfn))) {
+		printk(KERN_CRIT "Invalid frame number\n");
+		return EMULATE_FAIL;
+	}
+
+	/* Guard us against emulation from devices area */
+	if (unlikely(!page_is_ram(pfn))) {
+		printk(KERN_CRIT "Instruction emulation from non-RAM host "
+				"pages is not supported\n");
+		return EMULATE_FAIL;
+	}
+
+	page = pfn_to_page(pfn);
+	eaddr = (unsigned long)kmap_atomic(page);
+	eaddr |= addr & ~PAGE_MASK;
+	*instr = *(u32 *)eaddr;
+	kunmap_atomic((u32 *)eaddr);
+
+	return EMULATE_DONE;
+}
+#else
 int kvmppc_ld_inst(struct kvm_vcpu *vcpu, u32 *instr)
 {
 	return EMULATE_FAIL;
 };
+#endif

 /************* MMU Notifiers *************/