From patchwork Mon Mar 28 19:25:58 2011
X-Patchwork-Submitter: Scott Wood
X-Patchwork-Id: 88671
Date: Mon, 28 Mar 2011 14:25:58 -0500
From: Scott Wood
To:
Subject: [PATCH 3/4] KVM: PPC: e500: Introduce msr_block for e500v2
Message-ID: <20110328192558.GB11104@schlenkerla.am.freescale.net>
In-Reply-To: <20110328192454.GA11064@schlenkerla.am.freescale.net>
Cc: linuxppc-dev@lists.ozlabs.org
List-Id: Linux on PowerPC Developers Mail List

From: Yu Liu

In order to use lazy SPE register save/restore, we need to know when the guest
is using MSR[SPE]. To do that, we must control the actual MSR[SPE] separately
from the guest's notion of MSR[SPE]. Only bits that are set in msr_block can be
changed by the guest in the real MSR.
Signed-off-by: Liu Yu
Signed-off-by: Scott Wood
---
 arch/powerpc/include/asm/kvm_host.h |    3 +++
 arch/powerpc/kernel/asm-offsets.c   |    3 +++
 arch/powerpc/kvm/booke.h            |   17 +++++++++++++++++
 arch/powerpc/kvm/booke_interrupts.S |    6 +++++-
 arch/powerpc/kvm/e500.c             |    3 +++
 5 files changed, 31 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index bba3b9b..c376f6b 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -217,6 +217,9 @@ struct kvm_vcpu_arch {
 	ulong xer;
 	u32 cr;
 #endif
+#ifdef CONFIG_FSL_BOOKE
+	ulong msr_block;
+#endif
 
 #ifdef CONFIG_PPC_BOOK3S
 	ulong shadow_msr;
diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index 23e6a93..75b72c7 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -403,6 +403,9 @@ int main(void)
 	DEFINE(VCPU_SHARED, offsetof(struct kvm_vcpu, arch.shared));
 	DEFINE(VCPU_SHARED_MSR, offsetof(struct kvm_vcpu_arch_shared, msr));
+#ifdef CONFIG_FSL_BOOKE
+	DEFINE(VCPU_MSR_BLOCK, offsetof(struct kvm_vcpu, arch.msr_block));
+#endif
 
 	/* book3s */
 #ifdef CONFIG_PPC_BOOK3S
 	DEFINE(VCPU_HOST_RETIP, offsetof(struct kvm_vcpu, arch.host_retip));
diff --git a/arch/powerpc/kvm/booke.h b/arch/powerpc/kvm/booke.h
index 492bb70..303a415 100644
--- a/arch/powerpc/kvm/booke.h
+++ b/arch/powerpc/kvm/booke.h
@@ -52,6 +52,23 @@
 extern unsigned long kvmppc_booke_handlers;
 
+#ifdef CONFIG_FSL_BOOKE
+static inline bool kvmppc_msr_block_has(struct kvm_vcpu *vcpu, u32 block_bit)
+{
+	return !(vcpu->arch.msr_block & block_bit);
+}
+
+static inline void kvmppc_set_msr_block(struct kvm_vcpu *vcpu, u32 block_bit)
+{
+	vcpu->arch.msr_block &= ~block_bit;
+}
+
+static inline void kvmppc_clr_msr_block(struct kvm_vcpu *vcpu, u32 block_bit)
+{
+	vcpu->arch.msr_block |= block_bit;
+}
+#endif
+
 /* Helper function for "full" MSR writes. No need to call this if only EE is
  * changing.
  */
 static inline void kvmppc_set_msr(struct kvm_vcpu *vcpu, u32 new_msr)
diff --git a/arch/powerpc/kvm/booke_interrupts.S b/arch/powerpc/kvm/booke_interrupts.S
index ab29f5f..92193c7 100644
--- a/arch/powerpc/kvm/booke_interrupts.S
+++ b/arch/powerpc/kvm/booke_interrupts.S
@@ -409,7 +409,6 @@ lightweight_exit:
 	mtctr	r3
 	lwz	r3, VCPU_CR(r4)
 	mtcr	r3
-	lwz	r5, VCPU_GPR(r5)(r4)
 	lwz	r6, VCPU_GPR(r6)(r4)
 	lwz	r7, VCPU_GPR(r7)(r4)
 	lwz	r8, VCPU_GPR(r8)(r4)
@@ -419,6 +418,11 @@ lightweight_exit:
 	lwz	r3, (VCPU_SHARED_MSR + 4)(r3)
 	oris	r3, r3, KVMPPC_MSR_MASK@h
 	ori	r3, r3, KVMPPC_MSR_MASK@l
+#ifdef CONFIG_FSL_BOOKE
+	lwz	r5, VCPU_MSR_BLOCK(r4)
+	and	r3, r3, r5
+#endif
+	lwz	r5, VCPU_GPR(r5)(r4)
 	mtsrr1	r3
 
 	/* Clear any debug events which occurred since we disabled MSR[DE].
diff --git a/arch/powerpc/kvm/e500.c b/arch/powerpc/kvm/e500.c
index e762634..acfe052 100644
--- a/arch/powerpc/kvm/e500.c
+++ b/arch/powerpc/kvm/e500.c
@@ -67,6 +67,9 @@ int kvmppc_core_vcpu_setup(struct kvm_vcpu *vcpu)
 	/* Since booke kvm only support one core, update all vcpus' PIR to 0 */
 	vcpu->vcpu_id = 0;
 
+	/* Unblock all msr bits */
+	kvmppc_clr_msr_block(vcpu, ~0UL);
+
 	return 0;
 }