From patchwork Wed Jan 4 01:10:01 2012
X-Patchwork-Submitter: Alexander Graf
X-Patchwork-Id: 134171
From: Alexander Graf <agraf@suse.de>
To: kvm-ppc@vger.kernel.org
Cc: kvm list, Avi Kivity, Marcelo Tosatti, Scott Wood
Subject: [PATCH 03/50] KVM: PPC: e500: clear up confusion between host and guest entries
Date: Wed, 4 Jan 2012 02:10:01 +0100
Message-Id: <1325639448-9494-4-git-send-email-agraf@suse.de>
X-Mailer: git-send-email 1.7.3.4
In-Reply-To: <1325639448-9494-1-git-send-email-agraf@suse.de>
References: <1325639448-9494-1-git-send-email-agraf@suse.de>
X-Mailing-List: kvm-ppc@vger.kernel.org

From: Scott Wood

Split out the portions of tlbe_priv that should be associated with host
entries into tlbe_ref.  Base victim selection on the number of hardware
entries, not guest entries.

For TLB1, where one guest entry can be mapped by multiple host entries,
we use the host tlbe_ref for tracking page references.  For the guest
TLB0 entries, we still track them with gtlb_priv, to avoid having to
retranslate if the entry is evicted from the host TLB but not the guest
TLB.

Signed-off-by: Scott Wood
Signed-off-by: Alexander Graf
---
 arch/powerpc/include/asm/kvm_e500.h   |   24 +++-
 arch/powerpc/include/asm/mmu-book3e.h |    1 +
 arch/powerpc/kvm/e500_tlb.c           |  267 +++++++++++++++++++++++----------
 arch/powerpc/kvm/e500_tlb.h           |   17 --
 4 files changed, 213 insertions(+), 96 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_e500.h b/arch/powerpc/include/asm/kvm_e500.h
index adbfca9..a5197d8 100644
--- a/arch/powerpc/include/asm/kvm_e500.h
+++ b/arch/powerpc/include/asm/kvm_e500.h
@@ -32,13 +32,21 @@ struct tlbe{
 #define E500_TLB_VALID 1
 #define E500_TLB_DIRTY 2
 
-struct tlbe_priv {
+struct tlbe_ref {
 	pfn_t pfn;
 	unsigned int flags; /* E500_TLB_* */
 };
 
+struct tlbe_priv {
+	struct tlbe_ref ref; /* TLB0 only -- TLB1 uses tlb_refs */
+};
+
 struct vcpu_id_table;
 
+struct kvmppc_e500_tlb_params {
+	int entries, ways, sets;
+};
+
 struct kvmppc_vcpu_e500 {
 	/* Unmodified copy of the guest's TLB. */
 	struct tlbe *gtlb_arch[E500_TLB_NUM];
@@ -49,6 +57,20 @@ struct kvmppc_vcpu_e500 {
 	unsigned int gtlb_size[E500_TLB_NUM];
 	unsigned int gtlb_nv[E500_TLB_NUM];
 
+	/*
+	 * information associated with each host TLB entry --
+	 * TLB1 only for now.  If/when guest TLB1 entries can be
+	 * mapped with host TLB0, this will be used for that too.
+	 *
+	 * We don't want to use this for guest TLB0 because then we'd
+	 * have the overhead of doing the translation again even if
+	 * the entry is still in the guest TLB (e.g. we swapped out
+	 * and back, and our host TLB entries got evicted).
+	 */
+	struct tlbe_ref *tlb_refs[E500_TLB_NUM];
+
+	unsigned int host_tlb1_nv;
+
 	u32 host_pid[E500_PID_NUM];
 	u32 pid[E500_PID_NUM];
 	u32 svr;
diff --git a/arch/powerpc/include/asm/mmu-book3e.h b/arch/powerpc/include/asm/mmu-book3e.h
index 0260ea5..adbf5b4 100644
--- a/arch/powerpc/include/asm/mmu-book3e.h
+++ b/arch/powerpc/include/asm/mmu-book3e.h
@@ -167,6 +167,7 @@
 #define TLBnCFG_MAXSIZE		0x000f0000	/* Maximum Page Size (v1.0) */
 #define TLBnCFG_MAXSIZE_SHIFT	16
 #define TLBnCFG_ASSOC		0xff000000	/* Associativity */
+#define TLBnCFG_ASSOC_SHIFT	24
 
 /* TLBnPS encoding */
 #define TLBnPS_4K		0x00000004
diff --git a/arch/powerpc/kvm/e500_tlb.c b/arch/powerpc/kvm/e500_tlb.c
index b976d80..59221bb 100644
--- a/arch/powerpc/kvm/e500_tlb.c
+++ b/arch/powerpc/kvm/e500_tlb.c
@@ -12,6 +12,7 @@
  * published by the Free Software Foundation.
  */
 
+#include <linux/log2.h>
 #include <linux/kernel.h>
 #include <linux/types.h>
 #include <linux/slab.h>
@@ -26,7 +27,7 @@
 #include "trace.h"
 #include "timing.h"
 
-#define to_htlb1_esel(esel) (tlb1_entry_num - (esel) - 1)
+#define to_htlb1_esel(esel) (host_tlb_params[1].entries - (esel) - 1)
 
 struct id {
 	unsigned long val;
@@ -63,7 +64,7 @@ static DEFINE_PER_CPU(struct pcpu_id_table, pcpu_sids);
  * The valid range of shadow ID is [1..255]
  */
 static DEFINE_PER_CPU(unsigned long, pcpu_last_used_sid);
-static unsigned int tlb1_entry_num;
+static struct kvmppc_e500_tlb_params host_tlb_params[E500_TLB_NUM];
 
 /*
  * Allocate a free shadow id and setup a valid sid mapping in given entry.
@@ -237,7 +238,7 @@ void kvmppc_dump_tlbs(struct kvm_vcpu *vcpu)
 	}
 }
 
-static inline unsigned int tlb0_get_next_victim(
+static inline unsigned int gtlb0_get_next_victim(
 	struct kvmppc_vcpu_e500 *vcpu_e500)
 {
 	unsigned int victim;
@@ -252,7 +253,7 @@
 static inline unsigned int tlb1_max_shadow_size(void)
 {
 	/* reserve one entry for magic page */
-	return tlb1_entry_num - tlbcam_index - 1;
+	return host_tlb_params[1].entries - tlbcam_index - 1;
 }
 
 static inline int tlbe_is_writable(struct tlbe *tlbe)
@@ -302,13 +303,12 @@ static inline void __write_host_tlbe(struct tlbe *stlbe, uint32_t mas0)
 	local_irq_restore(flags);
 }
 
+/* esel is index into set, not whole array */
 static inline void write_host_tlbe(struct kvmppc_vcpu_e500 *vcpu_e500,
 		int tlbsel, int esel, struct tlbe *stlbe)
 {
 	if (tlbsel == 0) {
-		__write_host_tlbe(stlbe,
-				  MAS0_TLBSEL(0) |
-				  MAS0_ESEL(esel & (KVM_E500_TLB0_WAY_NUM - 1)));
+		__write_host_tlbe(stlbe, MAS0_TLBSEL(0) | MAS0_ESEL(esel));
 	} else {
 		__write_host_tlbe(stlbe,
 				  MAS0_TLBSEL(1) |
@@ -355,8 +355,8 @@ void kvmppc_e500_tlb_put(struct kvm_vcpu *vcpu)
 {
 }
 
-static void kvmppc_e500_stlbe_invalidate(struct kvmppc_vcpu_e500 *vcpu_e500,
-					 int tlbsel, int esel)
+static void inval_gtlbe_on_host(struct kvmppc_vcpu_e500 *vcpu_e500,
+				int tlbsel, int esel)
 {
 	struct tlbe *gtlbe = &vcpu_e500->gtlb_arch[tlbsel][esel];
 	struct vcpu_id_table *idt = vcpu_e500->idt;
@@ -412,18 +412,53 @@ static void kvmppc_e500_stlbe_invalidate(struct kvmppc_vcpu_e500 *vcpu_e500,
 	preempt_enable();
 }
 
+static int tlb0_set_base(gva_t addr, int sets, int ways)
+{
+	int set_base;
+
+	set_base = (addr >> PAGE_SHIFT) & (sets - 1);
+	set_base *= ways;
+
+	return set_base;
+}
+
+static int gtlb0_set_base(struct kvmppc_vcpu_e500 *vcpu_e500, gva_t addr)
+{
+	int sets = KVM_E500_TLB0_SIZE / KVM_E500_TLB0_WAY_NUM;
+
+	return tlb0_set_base(addr, sets, KVM_E500_TLB0_WAY_NUM);
+}
+
+static int htlb0_set_base(gva_t addr)
+{
+	return tlb0_set_base(addr, host_tlb_params[0].sets,
host_tlb_params[0].ways); +} + +static unsigned int get_tlb_esel(struct kvmppc_vcpu_e500 *vcpu_e500, int tlbsel) +{ + unsigned int esel = get_tlb_esel_bit(vcpu_e500); + + if (tlbsel == 0) { + esel &= KVM_E500_TLB0_WAY_NUM_MASK; + esel += gtlb0_set_base(vcpu_e500, vcpu_e500->mas2); + } else { + esel &= vcpu_e500->gtlb_size[tlbsel] - 1; + } + + return esel; +} + /* Search the guest TLB for a matching entry. */ static int kvmppc_e500_tlb_index(struct kvmppc_vcpu_e500 *vcpu_e500, gva_t eaddr, int tlbsel, unsigned int pid, int as) { int size = vcpu_e500->gtlb_size[tlbsel]; - int set_base; + unsigned int set_base; int i; if (tlbsel == 0) { - int mask = size / KVM_E500_TLB0_WAY_NUM - 1; - set_base = (eaddr >> PAGE_SHIFT) & mask; - set_base *= KVM_E500_TLB0_WAY_NUM; + set_base = gtlb0_set_base(vcpu_e500, eaddr); size = KVM_E500_TLB0_WAY_NUM; } else { set_base = 0; @@ -455,29 +490,55 @@ static int kvmppc_e500_tlb_index(struct kvmppc_vcpu_e500 *vcpu_e500, return -1; } -static inline void kvmppc_e500_priv_setup(struct tlbe_priv *priv, - struct tlbe *gtlbe, - pfn_t pfn) +static inline void kvmppc_e500_ref_setup(struct tlbe_ref *ref, + struct tlbe *gtlbe, + pfn_t pfn) { - priv->pfn = pfn; - priv->flags = E500_TLB_VALID; + ref->pfn = pfn; + ref->flags = E500_TLB_VALID; if (tlbe_is_writable(gtlbe)) - priv->flags |= E500_TLB_DIRTY; + ref->flags |= E500_TLB_DIRTY; } -static inline void kvmppc_e500_priv_release(struct tlbe_priv *priv) +static inline void kvmppc_e500_ref_release(struct tlbe_ref *ref) { - if (priv->flags & E500_TLB_VALID) { - if (priv->flags & E500_TLB_DIRTY) - kvm_release_pfn_dirty(priv->pfn); + if (ref->flags & E500_TLB_VALID) { + if (ref->flags & E500_TLB_DIRTY) + kvm_release_pfn_dirty(ref->pfn); else - kvm_release_pfn_clean(priv->pfn); + kvm_release_pfn_clean(ref->pfn); + + ref->flags = 0; + } +} + +static void clear_tlb_privs(struct kvmppc_vcpu_e500 *vcpu_e500) +{ + int tlbsel = 0; + int i; - priv->flags = 0; + for (i = 0; i < vcpu_e500->gtlb_size[tlbsel]; i++) { + struct tlbe_ref *ref = + &vcpu_e500->gtlb_priv[tlbsel][i].ref; + kvmppc_e500_ref_release(ref); } } +static void clear_tlb_refs(struct kvmppc_vcpu_e500 *vcpu_e500) +{ + int stlbsel = 1; + int i; + + for (i = 0; i < host_tlb_params[stlbsel].entries; i++) { + struct tlbe_ref *ref = + &vcpu_e500->tlb_refs[stlbsel][i]; + kvmppc_e500_ref_release(ref); + } + + clear_tlb_privs(vcpu_e500); +} + static inline void kvmppc_e500_deliver_tlb_miss(struct kvm_vcpu *vcpu, unsigned int eaddr, int as) { @@ -487,7 +548,7 @@ static inline void kvmppc_e500_deliver_tlb_miss(struct kvm_vcpu *vcpu, /* since we only have two TLBs, only lower bit is used. */ tlbsel = (vcpu_e500->mas4 >> 28) & 0x1; - victim = (tlbsel == 0) ? tlb0_get_next_victim(vcpu_e500) : 0; + victim = (tlbsel == 0) ? gtlb0_get_next_victim(vcpu_e500) : 0; pidsel = (vcpu_e500->mas4 >> 16) & 0xf; tsized = (vcpu_e500->mas4 >> 7) & 0x1f; @@ -508,10 +569,12 @@ static inline void kvmppc_e500_deliver_tlb_miss(struct kvm_vcpu *vcpu, /* TID must be supplied by the caller */ static inline void kvmppc_e500_setup_stlbe(struct kvmppc_vcpu_e500 *vcpu_e500, struct tlbe *gtlbe, int tsize, - struct tlbe_priv *priv, + struct tlbe_ref *ref, u64 gvaddr, struct tlbe *stlbe) { - pfn_t pfn = priv->pfn; + pfn_t pfn = ref->pfn; + + BUG_ON(!(ref->flags & E500_TLB_VALID)); /* Force TS=1 IPROT=0 for all guest mappings. 
*/ stlbe->mas1 = MAS1_TSIZE(tsize) | MAS1_TS | MAS1_VALID; @@ -524,16 +587,15 @@ static inline void kvmppc_e500_setup_stlbe(struct kvmppc_vcpu_e500 *vcpu_e500, stlbe->mas7 = (pfn >> (32 - PAGE_SHIFT)) & MAS7_RPN; } - +/* sesel is an index into the entire array, not just the set */ static inline void kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500, - u64 gvaddr, gfn_t gfn, struct tlbe *gtlbe, int tlbsel, int esel, - struct tlbe *stlbe) + u64 gvaddr, gfn_t gfn, struct tlbe *gtlbe, int tlbsel, int sesel, + struct tlbe *stlbe, struct tlbe_ref *ref) { struct kvm_memory_slot *slot; unsigned long pfn, hva; int pfnmap = 0; int tsize = BOOK3E_PAGESZ_4K; - struct tlbe_priv *priv; /* * Translate guest physical to true physical, acquiring @@ -629,12 +691,11 @@ static inline void kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500, } } - /* Drop old priv and setup new one. */ - priv = &vcpu_e500->gtlb_priv[tlbsel][esel]; - kvmppc_e500_priv_release(priv); - kvmppc_e500_priv_setup(priv, gtlbe, pfn); + /* Drop old ref and setup new one. */ + kvmppc_e500_ref_release(ref); + kvmppc_e500_ref_setup(ref, gtlbe, pfn); - kvmppc_e500_setup_stlbe(vcpu_e500, gtlbe, tsize, priv, gvaddr, stlbe); + kvmppc_e500_setup_stlbe(vcpu_e500, gtlbe, tsize, ref, gvaddr, stlbe); } /* XXX only map the one-one case, for now use TLB0 */ @@ -642,14 +703,22 @@ static int kvmppc_e500_tlb0_map(struct kvmppc_vcpu_e500 *vcpu_e500, int esel, struct tlbe *stlbe) { struct tlbe *gtlbe; + struct tlbe_ref *ref; + int sesel = esel & (host_tlb_params[0].ways - 1); + int sesel_base; + gva_t ea; gtlbe = &vcpu_e500->gtlb_arch[0][esel]; + ref = &vcpu_e500->gtlb_priv[0][esel].ref; + + ea = get_tlb_eaddr(gtlbe); + sesel_base = htlb0_set_base(ea); kvmppc_e500_shadow_map(vcpu_e500, get_tlb_eaddr(gtlbe), get_tlb_raddr(gtlbe) >> PAGE_SHIFT, - gtlbe, 0, esel, stlbe); + gtlbe, 0, sesel_base + sesel, stlbe, ref); - return esel; + return sesel; } /* Caller must ensure that the specified guest TLB entry is safe to insert into @@ -658,14 +727,17 @@ static int kvmppc_e500_tlb0_map(struct kvmppc_vcpu_e500 *vcpu_e500, static int kvmppc_e500_tlb1_map(struct kvmppc_vcpu_e500 *vcpu_e500, u64 gvaddr, gfn_t gfn, struct tlbe *gtlbe, struct tlbe *stlbe) { + struct tlbe_ref *ref; unsigned int victim; - victim = vcpu_e500->gtlb_nv[1]++; + victim = vcpu_e500->host_tlb1_nv++; - if (unlikely(vcpu_e500->gtlb_nv[1] >= tlb1_max_shadow_size())) - vcpu_e500->gtlb_nv[1] = 0; + if (unlikely(vcpu_e500->host_tlb1_nv >= tlb1_max_shadow_size())) + vcpu_e500->host_tlb1_nv = 0; - kvmppc_e500_shadow_map(vcpu_e500, gvaddr, gfn, gtlbe, 1, victim, stlbe); + ref = &vcpu_e500->tlb_refs[1][victim]; + kvmppc_e500_shadow_map(vcpu_e500, gvaddr, gfn, gtlbe, 1, + victim, stlbe, ref); return victim; } @@ -792,7 +864,7 @@ int kvmppc_e500_emul_tlbsx(struct kvm_vcpu *vcpu, int rb) /* since we only have two TLBs, only lower bit is used. */ tlbsel = vcpu_e500->mas4 >> 28 & 0x1; - victim = (tlbsel == 0) ? tlb0_get_next_victim(vcpu_e500) : 0; + victim = (tlbsel == 0) ? 
gtlb0_get_next_victim(vcpu_e500) : 0; vcpu_e500->mas0 = MAS0_TLBSEL(tlbsel) | MAS0_ESEL(victim) | MAS0_NV(vcpu_e500->gtlb_nv[tlbsel]); @@ -839,7 +911,7 @@ int kvmppc_e500_emul_tlbwe(struct kvm_vcpu *vcpu) gtlbe = &vcpu_e500->gtlb_arch[tlbsel][esel]; if (get_tlb_v(gtlbe)) - kvmppc_e500_stlbe_invalidate(vcpu_e500, tlbsel, esel); + inval_gtlbe_on_host(vcpu_e500, tlbsel, esel); gtlbe->mas1 = vcpu_e500->mas1; gtlbe->mas2 = vcpu_e500->mas2; @@ -950,11 +1022,11 @@ void kvmppc_mmu_map(struct kvm_vcpu *vcpu, u64 eaddr, gpa_t gpaddr, switch (tlbsel) { case 0: stlbsel = 0; - sesel = esel; - priv = &vcpu_e500->gtlb_priv[stlbsel][sesel]; + sesel = esel & (host_tlb_params[0].ways - 1); + priv = &vcpu_e500->gtlb_priv[tlbsel][esel]; kvmppc_e500_setup_stlbe(vcpu_e500, gtlbe, BOOK3E_PAGESZ_4K, - priv, eaddr, &stlbe); + &priv->ref, eaddr, &stlbe); break; case 1: { @@ -1020,32 +1092,76 @@ void kvmppc_e500_tlb_setup(struct kvmppc_vcpu_e500 *vcpu_e500) int kvmppc_e500_tlb_init(struct kvmppc_vcpu_e500 *vcpu_e500) { - tlb1_entry_num = mfspr(SPRN_TLB1CFG) & 0xFFF; + host_tlb_params[0].entries = mfspr(SPRN_TLB0CFG) & TLBnCFG_N_ENTRY; + host_tlb_params[1].entries = mfspr(SPRN_TLB1CFG) & TLBnCFG_N_ENTRY; + + /* + * This should never happen on real e500 hardware, but is + * architecturally possible -- e.g. in some weird nested + * virtualization case. + */ + if (host_tlb_params[0].entries == 0 || + host_tlb_params[1].entries == 0) { + pr_err("%s: need to know host tlb size\n", __func__); + return -ENODEV; + } + + host_tlb_params[0].ways = (mfspr(SPRN_TLB0CFG) & TLBnCFG_ASSOC) >> + TLBnCFG_ASSOC_SHIFT; + host_tlb_params[1].ways = host_tlb_params[1].entries; + + if (!is_power_of_2(host_tlb_params[0].entries) || + !is_power_of_2(host_tlb_params[0].ways) || + host_tlb_params[0].entries < host_tlb_params[0].ways || + host_tlb_params[0].ways == 0) { + pr_err("%s: bad tlb0 host config: %u entries %u ways\n", + __func__, host_tlb_params[0].entries, + host_tlb_params[0].ways); + return -ENODEV; + } + + host_tlb_params[0].sets = + host_tlb_params[0].entries / host_tlb_params[0].ways; + host_tlb_params[1].sets = 1; vcpu_e500->gtlb_size[0] = KVM_E500_TLB0_SIZE; vcpu_e500->gtlb_arch[0] = kzalloc(sizeof(struct tlbe) * KVM_E500_TLB0_SIZE, GFP_KERNEL); if (vcpu_e500->gtlb_arch[0] == NULL) - goto err_out; + goto err; vcpu_e500->gtlb_size[1] = KVM_E500_TLB1_SIZE; vcpu_e500->gtlb_arch[1] = kzalloc(sizeof(struct tlbe) * KVM_E500_TLB1_SIZE, GFP_KERNEL); if (vcpu_e500->gtlb_arch[1] == NULL) - goto err_out_guest0; - - vcpu_e500->gtlb_priv[0] = (struct tlbe_priv *) - kzalloc(sizeof(struct tlbe_priv) * KVM_E500_TLB0_SIZE, GFP_KERNEL); - if (vcpu_e500->gtlb_priv[0] == NULL) - goto err_out_guest1; - vcpu_e500->gtlb_priv[1] = (struct tlbe_priv *) - kzalloc(sizeof(struct tlbe_priv) * KVM_E500_TLB1_SIZE, GFP_KERNEL); - - if (vcpu_e500->gtlb_priv[1] == NULL) - goto err_out_priv0; + goto err; + + vcpu_e500->tlb_refs[0] = + kzalloc(sizeof(struct tlbe_ref) * host_tlb_params[0].entries, + GFP_KERNEL); + if (!vcpu_e500->tlb_refs[0]) + goto err; + + vcpu_e500->tlb_refs[1] = + kzalloc(sizeof(struct tlbe_ref) * host_tlb_params[1].entries, + GFP_KERNEL); + if (!vcpu_e500->tlb_refs[1]) + goto err; + + vcpu_e500->gtlb_priv[0] = + kzalloc(sizeof(struct tlbe_ref) * vcpu_e500->gtlb_size[0], + GFP_KERNEL); + if (!vcpu_e500->gtlb_priv[0]) + goto err; + + vcpu_e500->gtlb_priv[1] = + kzalloc(sizeof(struct tlbe_ref) * vcpu_e500->gtlb_size[1], + GFP_KERNEL); + if (!vcpu_e500->gtlb_priv[1]) + goto err; if (kvmppc_e500_id_table_alloc(vcpu_e500) == NULL) - goto 
err_out_priv1; + goto err; /* Init TLB configuration register */ vcpu_e500->tlb0cfg = mfspr(SPRN_TLB0CFG) & ~0xfffUL; @@ -1055,31 +1171,26 @@ int kvmppc_e500_tlb_init(struct kvmppc_vcpu_e500 *vcpu_e500) return 0; -err_out_priv1: - kfree(vcpu_e500->gtlb_priv[1]); -err_out_priv0: +err: + kfree(vcpu_e500->tlb_refs[0]); + kfree(vcpu_e500->tlb_refs[1]); kfree(vcpu_e500->gtlb_priv[0]); -err_out_guest1: - kfree(vcpu_e500->gtlb_arch[1]); -err_out_guest0: + kfree(vcpu_e500->gtlb_priv[1]); kfree(vcpu_e500->gtlb_arch[0]); -err_out: + kfree(vcpu_e500->gtlb_arch[1]); return -1; } void kvmppc_e500_tlb_uninit(struct kvmppc_vcpu_e500 *vcpu_e500) { - int stlbsel, i; - - /* release all privs */ - for (stlbsel = 0; stlbsel < 2; stlbsel++) - for (i = 0; i < vcpu_e500->gtlb_size[stlbsel]; i++) { - struct tlbe_priv *priv = - &vcpu_e500->gtlb_priv[stlbsel][i]; - kvmppc_e500_priv_release(priv); - } + clear_tlb_refs(vcpu_e500); kvmppc_e500_id_table_free(vcpu_e500); + + kfree(vcpu_e500->tlb_refs[0]); + kfree(vcpu_e500->tlb_refs[1]); + kfree(vcpu_e500->gtlb_priv[0]); + kfree(vcpu_e500->gtlb_priv[1]); kfree(vcpu_e500->gtlb_arch[1]); kfree(vcpu_e500->gtlb_arch[0]); } diff --git a/arch/powerpc/kvm/e500_tlb.h b/arch/powerpc/kvm/e500_tlb.h index 59b88e9..b587f69 100644 --- a/arch/powerpc/kvm/e500_tlb.h +++ b/arch/powerpc/kvm/e500_tlb.h @@ -155,23 +155,6 @@ static inline unsigned int get_tlb_esel_bit( return (vcpu_e500->mas0 >> 16) & 0xfff; } -static inline unsigned int get_tlb_esel( - const struct kvmppc_vcpu_e500 *vcpu_e500, - int tlbsel) -{ - unsigned int esel = get_tlb_esel_bit(vcpu_e500); - - if (tlbsel == 0) { - esel &= KVM_E500_TLB0_WAY_NUM_MASK; - esel |= ((vcpu_e500->mas2 >> 12) & KVM_E500_TLB0_WAY_SIZE_MASK) - << KVM_E500_TLB0_WAY_NUM_BIT; - } else { - esel &= KVM_E500_TLB1_SIZE - 1; - } - - return esel; -} - static inline int tlbe_is_host_safe(const struct kvm_vcpu *vcpu, const struct tlbe *tlbe) {
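
The set/way arithmetic introduced above is the core of the indexing change:
a guest effective address now picks a TLB0 set, and the victim way within
that set combines with the set base into an index over the whole entry
array. The following standalone sketch (not part of the patch; kernel types
and the TLB0CFG read are replaced with plain C and an illustrative
512-entry, 4-way geometry) mirrors tlb0_set_base() and the "sesel is an
index into the entire array, not just the set" convention:

#include <assert.h>
#include <stdio.h>

#define PAGE_SHIFT 12

/* Mirrors tlb0_set_base(): the set is selected by the low bits of the
 * effective page number, then scaled by the number of ways so the
 * result indexes a flat entries[sets * ways] array. */
static int tlb0_set_base(unsigned long addr, int sets, int ways)
{
	int set_base = (addr >> PAGE_SHIFT) & (sets - 1);
	return set_base * ways;
}

int main(void)
{
	int entries = 512, ways = 4, sets = entries / ways;
	unsigned long ea = 0x10003000;
	int set_base = tlb0_set_base(ea, sets, ways);

	/* A victim way (cf. gtlb0_get_next_victim() cycling through
	 * 0..ways-1) plus the set base gives the flat-array index. */
	for (int way = 0; way < ways; way++) {
		int sesel = set_base + way;
		assert(sesel < entries);
		printf("EA 0x%lx -> set %d, way %d, flat index %d\n",
		       ea, set_base / ways, way, sesel);
	}
	return 0;
}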
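The tlbe_priv/tlbe_ref split can likewise be reduced to a small model. The
sketch below (user space; kvm_release_pfn_dirty()/kvm_release_pfn_clean()
replaced by hypothetical print stubs) shows the invariant the patch relies
on: a ref is released along the dirty or clean path only while
E500_TLB_VALID is set, and clearing the flags afterwards makes release
idempotent -- the same invariant the new
BUG_ON(!(ref->flags & E500_TLB_VALID)) in kvmppc_e500_setup_stlbe()
asserts from the consumer side:

#include <stdint.h>
#include <stdio.h>

#define E500_TLB_VALID 1
#define E500_TLB_DIRTY 2

typedef uint64_t pfn_t;

struct tlbe_ref {
	pfn_t pfn;
	unsigned int flags;	/* E500_TLB_* */
};

/* Stand-ins for kvm_release_pfn_dirty()/kvm_release_pfn_clean(). */
static void release_pfn_dirty(pfn_t pfn)
{
	printf("dirty release of pfn 0x%llx\n", (unsigned long long)pfn);
}

static void release_pfn_clean(pfn_t pfn)
{
	printf("clean release of pfn 0x%llx\n", (unsigned long long)pfn);
}

/* Mirrors kvmppc_e500_ref_setup(): record the pfn, mark the ref valid,
 * and remember writability so release can dirty the page. */
static void ref_setup(struct tlbe_ref *ref, pfn_t pfn, int writable)
{
	ref->pfn = pfn;
	ref->flags = E500_TLB_VALID;
	if (writable)
		ref->flags |= E500_TLB_DIRTY;
}

/* Mirrors kvmppc_e500_ref_release(): only a valid ref holds a page
 * reference; zeroing the flags makes a second release a no-op. */
static void ref_release(struct tlbe_ref *ref)
{
	if (ref->flags & E500_TLB_VALID) {
		if (ref->flags & E500_TLB_DIRTY)
			release_pfn_dirty(ref->pfn);
		else
			release_pfn_clean(ref->pfn);
		ref->flags = 0;
	}
}

int main(void)
{
	struct tlbe_ref ref = { 0 };

	ref_setup(&ref, 0x1234, 1);
	ref_release(&ref);
	ref_release(&ref);	/* no-op: flags were cleared */
	return 0;
}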
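Finally, the new init-time geometry checks follow directly from how the
indexing uses TLB0CFG: both the entry count and the associativity must be
powers of two for the mask arithmetic to work. Below is a compact model of
that derivation; parse_tlb0cfg() and the register image are made up for
illustration, while the kernel reads the real value with
mfspr(SPRN_TLB0CFG) as the patch shows:

#include <stdbool.h>
#include <stdio.h>

#define TLBnCFG_N_ENTRY		0x00000fff
#define TLBnCFG_ASSOC		0xff000000
#define TLBnCFG_ASSOC_SHIFT	24

struct tlb_params {
	unsigned int entries, ways, sets;
};

static bool is_power_of_2(unsigned int n)
{
	return n != 0 && (n & (n - 1)) == 0;
}

/* Same derivation and sanity checks as kvmppc_e500_tlb_init(). */
static int parse_tlb0cfg(unsigned int tlb0cfg, struct tlb_params *p)
{
	p->entries = tlb0cfg & TLBnCFG_N_ENTRY;
	p->ways = (tlb0cfg & TLBnCFG_ASSOC) >> TLBnCFG_ASSOC_SHIFT;

	if (!is_power_of_2(p->entries) || !is_power_of_2(p->ways) ||
	    p->entries < p->ways || p->ways == 0)
		return -1;	/* the patch rejects this with -ENODEV */

	p->sets = p->entries / p->ways;
	return 0;
}

int main(void)
{
	struct tlb_params p;
	/* Illustrative TLB0: 512 entries, 4-way set associative. */
	unsigned int cfg = (4u << TLBnCFG_ASSOC_SHIFT) | 512u;

	if (parse_tlb0cfg(cfg, &p) == 0)
		printf("tlb0: %u entries, %u ways, %u sets\n",
		       p.entries, p.ways, p.sets);
	return 0;
}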