Message ID | 1375355558-19187-6-git-send-email-Bharat.Bhushan@freescale.com (mailing list archive) |
---|---|
State | Superseded, archived |
On 08/01/2013 07:12 PM, Bharat Bhushan wrote:
> KVM need to lookup linux pte for getting TLB attributes (WIMGE).
> This is similar to how book3s does.
> This will be used in follow-up patches.
>
> Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
> ---
> v1->v2
>  - This is a new change in this version
>
>  arch/powerpc/include/asm/kvm_booke.h |   73 ++++++++++++++++++++++++++++++++++
>  1 files changed, 73 insertions(+), 0 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/kvm_booke.h b/arch/powerpc/include/asm/kvm_booke.h
> index d3c1eb3..903624d 100644
> --- a/arch/powerpc/include/asm/kvm_booke.h
> +++ b/arch/powerpc/include/asm/kvm_booke.h
> @@ -102,4 +102,77 @@ static inline ulong kvmppc_get_msr(struct kvm_vcpu *vcpu)
>  {
>  	return vcpu->arch.shared->msr;
>  }
> +
> +/*
> + * Lock and read a linux PTE.  If it's present and writable, atomically
> + * set dirty and referenced bits and return the PTE, otherwise return 0.
> + */
> +static inline pte_t kvmppc_read_update_linux_pte(pte_t *p, int writing)
> +{
> +	pte_t pte;
> +
> +#ifdef PTE_ATOMIC_UPDATES
> +	pte_t tmp;
> +	/* wait until _PAGE_BUSY is clear then set it atomically */
> +#ifdef CONFIG_PPC64
> +	__asm__ __volatile__ (
> +		"1:	ldarx	%0,0,%3\n"
> +		"	andi.	%1,%0,%4\n"
> +		"	bne-	1b\n"
> +		"	ori	%1,%0,%4\n"
> +		"	stdcx.	%1,0,%3\n"
> +		"	bne-	1b"
> +		: "=&r" (pte), "=&r" (tmp), "=m" (*p)
> +		: "r" (p), "i" (_PAGE_BUSY)
> +		: "cc");
> +#else
> +	__asm__ __volatile__ (
> +		"1:	lwarx	%0,0,%3\n"
> +		"	andi.	%1,%0,%4\n"
> +		"	bne-	1b\n"
> +		"	ori	%1,%0,%4\n"
> +		"	stwcx.	%1,0,%3\n"
> +		"	bne-	1b"
> +		: "=&r" (pte), "=&r" (tmp), "=m" (*p)
> +		: "r" (p), "i" (_PAGE_BUSY)
> +		: "cc");
> +#endif
> +#else
> +	pte = pte_val(*p);
> +#endif
> +
> +	if (pte_present(pte)) {
> +		pte = pte_mkyoung(pte);
> +		if (writing && pte_write(pte))
> +			pte = pte_mkdirty(pte);
> +	}
> +
> +	*p = pte;	/* clears _PAGE_BUSY */
> +
> +	return pte;
> +}
> +
> +static inline pte_t lookup_linux_pte(pgd_t *pgdir, unsigned long hva,
> +				     int writing, unsigned long *pte_sizep)

Looks like this function is the same as the book3s one, so why not improve that one and make it common :)

Tiejun

> +{
> +	pte_t *ptep;
> +	unsigned long ps = *pte_sizep;
> +	unsigned int shift;
> +
> +	ptep = find_linux_pte_or_hugepte(pgdir, hva, &shift);
> +	if (!ptep)
> +		return __pte(0);
> +	if (shift)
> +		*pte_sizep = 1ul << shift;
> +	else
> +		*pte_sizep = PAGE_SIZE;
> +
> +	if (ps > *pte_sizep)
> +		return __pte(0);
> +	if (!pte_present(*ptep))
> +		return __pte(0);
> +
> +	return kvmppc_read_update_linux_pte(ptep, writing);
> +}
> +
>  #endif /* __ASM_KVM_BOOKE_H__ */
On Thu, 2013-08-01 at 16:42 +0530, Bharat Bhushan wrote:
> KVM need to lookup linux pte for getting TLB attributes (WIMGE).
> This is similar to how book3s does.
> This will be used in follow-up patches.
>
> Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
> ---
> v1->v2
>  - This is a new change in this version
>
>  arch/powerpc/include/asm/kvm_booke.h |   73 ++++++++++++++++++++++++++++++++++
>  1 files changed, 73 insertions(+), 0 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/kvm_booke.h b/arch/powerpc/include/asm/kvm_booke.h
> index d3c1eb3..903624d 100644
> --- a/arch/powerpc/include/asm/kvm_booke.h
> +++ b/arch/powerpc/include/asm/kvm_booke.h
> @@ -102,4 +102,77 @@ static inline ulong kvmppc_get_msr(struct kvm_vcpu *vcpu)
>  {
>  	return vcpu->arch.shared->msr;
>  }
> +
> +/*
> + * Lock and read a linux PTE.  If it's present and writable, atomically
> + * set dirty and referenced bits and return the PTE, otherwise return 0.
> + */
> +static inline pte_t kvmppc_read_update_linux_pte(pte_t *p, int writing)
> +{
> +	pte_t pte;
> +
> +#ifdef PTE_ATOMIC_UPDATES
> +	pte_t tmp;
> +	/* wait until _PAGE_BUSY is clear then set it atomically */

_PAGE_BUSY is 0 on book3e.

> +#ifdef CONFIG_PPC64
> +	__asm__ __volatile__ (
> +		"1:	ldarx	%0,0,%3\n"
> +		"	andi.	%1,%0,%4\n"
> +		"	bne-	1b\n"
> +		"	ori	%1,%0,%4\n"
> +		"	stdcx.	%1,0,%3\n"
> +		"	bne-	1b"
> +		: "=&r" (pte), "=&r" (tmp), "=m" (*p)
> +		: "r" (p), "i" (_PAGE_BUSY)
> +		: "cc");
> +#else
> +	__asm__ __volatile__ (
> +		"1:	lwarx	%0,0,%3\n"
> +		"	andi.	%1,%0,%4\n"
> +		"	bne-	1b\n"
> +		"	ori	%1,%0,%4\n"
> +		"	stwcx.	%1,0,%3\n"
> +		"	bne-	1b"
> +		: "=&r" (pte), "=&r" (tmp), "=m" (*p)
> +		: "r" (p), "i" (_PAGE_BUSY)
> +		: "cc");
> +#endif

What about 64-bit PTEs on 32-bit kernels?

In any case, this code does not belong in KVM.  It should be in the main PPC mm code, even if KVM is the only user.

-Scott
On Fri, 2013-08-02 at 17:58 -0500, Scott Wood wrote:
>
> What about 64-bit PTEs on 32-bit kernels?
>
> In any case, this code does not belong in KVM.  It should be in the
> main PPC mm code, even if KVM is the only user.

Also, don't we do similar things in BookS KVM? At the very least that stuff should become common. And yes, I agree, it should probably also move to pgtable*

Cheers,
Ben.
> -----Original Message-----
> From: Benjamin Herrenschmidt [mailto:benh@kernel.crashing.org]
> Sent: Saturday, August 03, 2013 4:47 AM
> To: Wood Scott-B07421
> Cc: Bhushan Bharat-R65777; agraf@suse.de; kvm-ppc@vger.kernel.org; kvm@vger.kernel.org; linuxppc-dev@lists.ozlabs.org; Bhushan Bharat-R65777
> Subject: Re: [PATCH 5/6 v2] kvm: powerpc: booke: Add linux pte lookup like booke3s
>
> On Fri, 2013-08-02 at 17:58 -0500, Scott Wood wrote:
> >
> > What about 64-bit PTEs on 32-bit kernels?
> >
> > In any case, this code does not belong in KVM.  It should be in the
> > main PPC mm code, even if KVM is the only user.
>
> Also, don't we do similar things in BookS KVM? At the very least that stuff should become common. And yes, I agree, it should probably also move to pgtable*

One of the problems I saw was that if I put this code in asm/pgtable-32.h and asm/pgtable-64.h, then pte_present() and other friend functions (on which this code depends) are defined in pgtable.h, and pgtable.h includes asm/pgtable-32.h and asm/pgtable-64.h before it defines pte_present() and friends.

OK, I will move this into asm/pgtable*.h; initially I fought with myself over putting this code in pgtable*, but finally ended up doing it here (got biased by book3s :)).

Thanks
-Bharat
On Sat, 2013-08-03 at 02:58 +0000, Bhushan Bharat-R65777 wrote:
> One of the problems I saw was that if I put this code in
> asm/pgtable-32.h and asm/pgtable-64.h, then pte_present() and other
> friend functions (on which this code depends) are defined in
> pgtable.h, and pgtable.h includes asm/pgtable-32.h and
> asm/pgtable-64.h before it defines pte_present() and friends.
>
> OK, I will move this into asm/pgtable*.h; initially I fought with
> myself over putting this code in pgtable*, but finally ended up doing
> it here (got biased by book3s :)).

Is there a reason why these routines can not be completely generic in pgtable.h ?

Ben.
diff --git a/arch/powerpc/include/asm/kvm_booke.h b/arch/powerpc/include/asm/kvm_booke.h
index d3c1eb3..903624d 100644
--- a/arch/powerpc/include/asm/kvm_booke.h
+++ b/arch/powerpc/include/asm/kvm_booke.h
@@ -102,4 +102,77 @@ static inline ulong kvmppc_get_msr(struct kvm_vcpu *vcpu)
 {
 	return vcpu->arch.shared->msr;
 }
+
+/*
+ * Lock and read a linux PTE.  If it's present and writable, atomically
+ * set dirty and referenced bits and return the PTE, otherwise return 0.
+ */
+static inline pte_t kvmppc_read_update_linux_pte(pte_t *p, int writing)
+{
+	pte_t pte;
+
+#ifdef PTE_ATOMIC_UPDATES
+	pte_t tmp;
+	/* wait until _PAGE_BUSY is clear then set it atomically */
+#ifdef CONFIG_PPC64
+	__asm__ __volatile__ (
+		"1:	ldarx	%0,0,%3\n"
+		"	andi.	%1,%0,%4\n"
+		"	bne-	1b\n"
+		"	ori	%1,%0,%4\n"
+		"	stdcx.	%1,0,%3\n"
+		"	bne-	1b"
+		: "=&r" (pte), "=&r" (tmp), "=m" (*p)
+		: "r" (p), "i" (_PAGE_BUSY)
+		: "cc");
+#else
+	__asm__ __volatile__ (
+		"1:	lwarx	%0,0,%3\n"
+		"	andi.	%1,%0,%4\n"
+		"	bne-	1b\n"
+		"	ori	%1,%0,%4\n"
+		"	stwcx.	%1,0,%3\n"
+		"	bne-	1b"
+		: "=&r" (pte), "=&r" (tmp), "=m" (*p)
+		: "r" (p), "i" (_PAGE_BUSY)
+		: "cc");
+#endif
+#else
+	pte = pte_val(*p);
+#endif
+
+	if (pte_present(pte)) {
+		pte = pte_mkyoung(pte);
+		if (writing && pte_write(pte))
+			pte = pte_mkdirty(pte);
+	}
+
+	*p = pte;	/* clears _PAGE_BUSY */
+
+	return pte;
+}
+
+static inline pte_t lookup_linux_pte(pgd_t *pgdir, unsigned long hva,
+				     int writing, unsigned long *pte_sizep)
+{
+	pte_t *ptep;
+	unsigned long ps = *pte_sizep;
+	unsigned int shift;
+
+	ptep = find_linux_pte_or_hugepte(pgdir, hva, &shift);
+	if (!ptep)
+		return __pte(0);
+	if (shift)
+		*pte_sizep = 1ul << shift;
+	else
+		*pte_sizep = PAGE_SIZE;
+
+	if (ps > *pte_sizep)
+		return __pte(0);
+	if (!pte_present(*ptep))
+		return __pte(0);
+
+	return kvmppc_read_update_linux_pte(ptep, writing);
+}
+
 #endif /* __ASM_KVM_BOOKE_H__ */
KVM needs to look up the linux pte to get the TLB attributes (WIMGE).
This is similar to what book3s does.
This will be used in follow-up patches.

Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
---
v1->v2
 - This is a new change in this version

 arch/powerpc/include/asm/kvm_booke.h |   73 ++++++++++++++++++++++++++++++++++
 1 files changed, 73 insertions(+), 0 deletions(-)