Message ID | 1480385626.11342.53.camel@kernel.crashing.org (mailing list archive) |
---|---|
State | Accepted |
Benjamin Herrenschmidt <benh@kernel.crashing.org> writes:

> On 64-bit CPUs with no-execute support and non-snooping icache, such as
> 970 or POWER4, we have a software mechanism to ensure coherency of the
> cache (using exec faults when needed).
>
> This was broken due to a logic inversion when that code was rewritten
> from assembly to C.

The asm code for reference is

	BEGIN_FTR_SECTION
	mr	r4,r30
	mr	r5,r7
	bl	hash_page_do_lazy_icache
	END_FTR_SECTION(CPU_FTR_NOEXECUTE|CPU_FTR_COHERENT_ICACHE, CPU_FTR_NOEXECUTE)

Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>

> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
> Fixes: 91f1da99792a1d133df94c4753510305353064a1
> Fixes: 89ff725051d177556b23d80f2a30f880a657a6c1
> Fixes: a43c0eb8364c022725df586e91dd753633374d66
> --
> diff --git a/arch/powerpc/mm/hash64_4k.c b/arch/powerpc/mm/hash64_4k.c
> index 42c702b..6fa450c 100644
> --- a/arch/powerpc/mm/hash64_4k.c
> +++ b/arch/powerpc/mm/hash64_4k.c
> @@ -55,7 +55,7 @@ int __hash_page_4K(unsigned long ea, unsigned long access, unsigned long vsid,
>  	 */
>  	rflags = htab_convert_pte_flags(new_pte);
>
> -	if (!cpu_has_feature(CPU_FTR_NOEXECUTE) &&
> +	if (cpu_has_feature(CPU_FTR_NOEXECUTE) &&
>  	    !cpu_has_feature(CPU_FTR_COHERENT_ICACHE))
>  		rflags = hash_page_do_lazy_icache(rflags, __pte(old_pte), trap);
>
> diff --git a/arch/powerpc/mm/hash64_64k.c b/arch/powerpc/mm/hash64_64k.c
> index 3bbbea0..1a68cb1 100644
> --- a/arch/powerpc/mm/hash64_64k.c
> +++ b/arch/powerpc/mm/hash64_64k.c
> @@ -87,7 +87,7 @@ int __hash_page_4K(unsigned long ea, unsigned long access, unsigned long vsid,
>  	subpg_pte = new_pte & ~subpg_prot;
>  	rflags = htab_convert_pte_flags(subpg_pte);
>
> -	if (!cpu_has_feature(CPU_FTR_NOEXECUTE) &&
> +	if (cpu_has_feature(CPU_FTR_NOEXECUTE) &&
>  	    !cpu_has_feature(CPU_FTR_COHERENT_ICACHE)) {
>
>  		/*
> @@ -258,7 +258,7 @@ int __hash_page_64K(unsigned long ea, unsigned long access,
>
>  	rflags = htab_convert_pte_flags(new_pte);
>
> -	if (!cpu_has_feature(CPU_FTR_NOEXECUTE) &&
> +	if (cpu_has_feature(CPU_FTR_NOEXECUTE) &&
>  	    !cpu_has_feature(CPU_FTR_COHERENT_ICACHE))
>  		rflags = hash_page_do_lazy_icache(rflags, __pte(old_pte), trap);
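As background for the feature-section reference above, here is a minimal, self-contained sketch. The macro names are the kernel's, but the flag values and the ftr_section_active() helper below are invented for illustration, assuming the usual rule that code under BEGIN_FTR_SECTION ... END_FTR_SECTION(msk, val) is kept only when the CPU's feature bits masked by msk equal val:

#include <stdio.h>

/* Illustration only: these flag values and this helper are made up for
 * this sketch; they are not the kernel's definitions. */
#define CPU_FTR_NOEXECUTE       0x1UL
#define CPU_FTR_COHERENT_ICACHE 0x2UL

/* Assumed selection rule for BEGIN_FTR_SECTION/END_FTR_SECTION(msk, val):
 * the guarded instructions stay in place iff (cpu_features & msk) == val. */
static int ftr_section_active(unsigned long cpu_features,
			      unsigned long msk, unsigned long val)
{
	return (cpu_features & msk) == val;
}

int main(void)
{
	unsigned long msk = CPU_FTR_NOEXECUTE | CPU_FTR_COHERENT_ICACHE;
	unsigned long val = CPU_FTR_NOEXECUTE;

	/* 970/POWER4-class CPU: no-execute supported, icache not snooping. */
	unsigned long p970_features = CPU_FTR_NOEXECUTE;
	/* CPU with a coherent (snooping) icache: lazy flush not needed. */
	unsigned long coherent_features = CPU_FTR_NOEXECUTE | CPU_FTR_COHERENT_ICACHE;

	printf("lazy icache call kept on 970-class CPU: %d\n",
	       ftr_section_active(p970_features, msk, val));      /* prints 1 */
	printf("lazy icache call kept on coherent-icache CPU: %d\n",
	       ftr_section_active(coherent_features, msk, val));  /* prints 0 */
	return 0;
}

Under that assumed rule, the asm branch to hash_page_do_lazy_icache is active exactly when CPU_FTR_NOEXECUTE is set and CPU_FTR_COHERENT_ICACHE is clear, which is the condition the C rewrite has to reproduce.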
On Tue, 2016-11-29 at 02:13:46 UTC, Benjamin Herrenschmidt wrote:
> On 64-bit CPUs with no-execute support and non-snooping icache, such as
> 970 or POWER4, we have a software mechanism to ensure coherency of the
> cache (using exec faults when needed).
>
> This was broken due to a logic inversion when that code was rewritten
> from assembly to C.
>
> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
> Fixes: 91f1da99792a1d133df94c4753510305353064a1
> Fixes: 89ff725051d177556b23d80f2a30f880a657a6c1
> Fixes: a43c0eb8364c022725df586e91dd753633374d66
> Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>

Applied to powerpc fixes, thanks.

https://git.kernel.org/powerpc/c/dd7b2f035ec41a409f7a7cec7aabc0

cheers
diff --git a/arch/powerpc/mm/hash64_4k.c b/arch/powerpc/mm/hash64_4k.c
index 42c702b..6fa450c 100644
--- a/arch/powerpc/mm/hash64_4k.c
+++ b/arch/powerpc/mm/hash64_4k.c
@@ -55,7 +55,7 @@ int __hash_page_4K(unsigned long ea, unsigned long access, unsigned long vsid,
 	 */
 	rflags = htab_convert_pte_flags(new_pte);
 
-	if (!cpu_has_feature(CPU_FTR_NOEXECUTE) &&
+	if (cpu_has_feature(CPU_FTR_NOEXECUTE) &&
 	    !cpu_has_feature(CPU_FTR_COHERENT_ICACHE))
 		rflags = hash_page_do_lazy_icache(rflags, __pte(old_pte), trap);
 
diff --git a/arch/powerpc/mm/hash64_64k.c b/arch/powerpc/mm/hash64_64k.c
index 3bbbea0..1a68cb1 100644
--- a/arch/powerpc/mm/hash64_64k.c
+++ b/arch/powerpc/mm/hash64_64k.c
@@ -87,7 +87,7 @@ int __hash_page_4K(unsigned long ea, unsigned long access, unsigned long vsid,
 	subpg_pte = new_pte & ~subpg_prot;
 	rflags = htab_convert_pte_flags(subpg_pte);
 
-	if (!cpu_has_feature(CPU_FTR_NOEXECUTE) &&
+	if (cpu_has_feature(CPU_FTR_NOEXECUTE) &&
 	    !cpu_has_feature(CPU_FTR_COHERENT_ICACHE)) {
 
 		/*
@@ -258,7 +258,7 @@ int __hash_page_64K(unsigned long ea, unsigned long access,
 
 	rflags = htab_convert_pte_flags(new_pte);
 
-	if (!cpu_has_feature(CPU_FTR_NOEXECUTE) &&
+	if (cpu_has_feature(CPU_FTR_NOEXECUTE) &&
 	    !cpu_has_feature(CPU_FTR_COHERENT_ICACHE))
 		rflags = hash_page_do_lazy_icache(rflags, __pte(old_pte), trap);
On 64-bit CPUs with no-execute support and non-snooping icache, such as
970 or POWER4, we have a software mechanism to ensure coherency of the
cache (using exec faults when needed).

This was broken due to a logic inversion when that code was rewritten
from assembly to C.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Fixes: 91f1da99792a1d133df94c4753510305353064a1
Fixes: 89ff725051d177556b23d80f2a30f880a657a6c1
Fixes: a43c0eb8364c022725df586e91dd753633374d66
--
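To make the logic inversion concrete, here is a standalone, runnable sketch (not kernel code; the feature flags, the cpu_features value, and the has_feature() stub are made up for this demonstration) of the pre-fix check versus the fixed one on a 970/POWER4-class CPU:

#include <stdio.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the kernel's feature flags and
 * cpu_has_feature(); values are arbitrary for this sketch. */
#define CPU_FTR_NOEXECUTE       0x1UL
#define CPU_FTR_COHERENT_ICACHE 0x2UL

/* A 970/POWER4-class CPU: no-execute support, non-snooping icache. */
static unsigned long cpu_features = CPU_FTR_NOEXECUTE;

static bool has_feature(unsigned long f)
{
	return (cpu_features & f) != 0;
}

int main(void)
{
	/* Pre-fix (inverted) condition: never true on the CPUs that
	 * actually need the exec-fault based icache flush. */
	bool broken = !has_feature(CPU_FTR_NOEXECUTE) &&
		      !has_feature(CPU_FTR_COHERENT_ICACHE);

	/* Fixed condition: the lazy icache handling runs exactly when the
	 * CPU has no-execute support but no snooping icache. */
	bool fixed = has_feature(CPU_FTR_NOEXECUTE) &&
		     !has_feature(CPU_FTR_COHERENT_ICACHE);

	printf("broken check would run lazy icache flush: %d\n", broken); /* prints 0 */
	printf("fixed check would run lazy icache flush:  %d\n", fixed);  /* prints 1 */
	return 0;
}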