| Message ID | 20231102132327.294050-1-aneesh.kumar@linux.ibm.com (mailing list archive) |
|---|---|
| State | RFC |
| Series | [RFC] powerpc/book3s/hash: Drop _PAGE_PRIVILEGED from PAGE_NONE |
On Thu Nov 2, 2023 at 11:23 PM AEST, Aneesh Kumar K.V wrote:
> There used to be a dependency on _PAGE_PRIVILEGED with pte_savedwrite.
> But that got dropped by
> commit 6a56ccbcf6c6 ("mm/autonuma: use can_change_(pte|pmd)_writable() to replace savedwrite")
>
> With this change numa fault pte (pte_protnone()) gets mapped as regular user pte
> with RWX cleared (no-access).

You mean "that" above change (not *this* change), right?

> This also remove pte_user() from
> book3s/64.

Nice cleanup. That was an annoying hack.

> pte_access_permitted() now checks for _PAGE_EXEC because we now support
> EXECONLY mappings.

AFAIKS pte_exec() is not required, GUP is really only for read or
write access. It should be a separate patch if you think it's needed.

>
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
> ---
>  arch/powerpc/include/asm/book3s/64/pgtable.h | 23 +++++---------------
>  arch/powerpc/mm/book3s64/hash_utils.c        | 17 +++++++++++++++
>  2 files changed, 23 insertions(+), 17 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
> index cb77eddca54b..7c7de7b56df0 100644
> --- a/arch/powerpc/include/asm/book3s/64/pgtable.h
> +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
> @@ -17,12 +17,6 @@
>  #define _PAGE_EXEC		0x00001 /* execute permission */
>  #define _PAGE_WRITE		0x00002 /* write access allowed */
>  #define _PAGE_READ		0x00004 /* read access allowed */
> -#define _PAGE_NA		_PAGE_PRIVILEGED
> -#define _PAGE_NAX		_PAGE_EXEC
> -#define _PAGE_RO		_PAGE_READ
> -#define _PAGE_ROX		(_PAGE_READ | _PAGE_EXEC)
> -#define _PAGE_RW		(_PAGE_READ | _PAGE_WRITE)
> -#define _PAGE_RWX		(_PAGE_READ | _PAGE_WRITE | _PAGE_EXEC)
>  #define _PAGE_PRIVILEGED	0x00008 /* kernel access only */
>  #define _PAGE_SAO		0x00010 /* Strong access order */
>  #define _PAGE_NON_IDEMPOTENT	0x00020 /* non idempotent memory */

Did you leave PAGE_NONE as _PAGE_BASE | _PAGE_PRIVILEGED below?
Shouldn't that be changed too? Then this patch is not only hash
but also radix.

Why is the hash change required? Previously PAGE_NONE relied on
privileged bit to prevent access, now you need to handle a PTE
without that bit? In that case could that be patch 1, then the
rest patch 2?

__pte_flags_need_flush() should be updated after this too,
basically revert commit 1abce0580b894.

> @@ -119,9 +113,9 @@
>  /*
>   * user access blocked by key
>   */
> -#define _PAGE_KERNEL_RW		(_PAGE_PRIVILEGED | _PAGE_RW | _PAGE_DIRTY)
>  #define _PAGE_KERNEL_RO		(_PAGE_PRIVILEGED | _PAGE_READ)
>  #define _PAGE_KERNEL_ROX	(_PAGE_PRIVILEGED | _PAGE_READ | _PAGE_EXEC)
> +#define _PAGE_KERNEL_RW		(_PAGE_PRIVILEGED | _PAGE_RW | _PAGE_DIRTY)
>  #define _PAGE_KERNEL_RWX	(_PAGE_PRIVILEGED | _PAGE_DIRTY | _PAGE_RW | _PAGE_EXEC)
>  /*
>   * _PAGE_CHG_MASK masks of bits that are to be preserved across

No need to reorder defines.

Thanks,
Nick

> @@ -523,19 +517,14 @@ static inline bool arch_pte_access_permitted(u64 pte, bool write, bool execute)
>  }
>  #endif /* CONFIG_PPC_MEM_KEYS */
>
> -static inline bool pte_user(pte_t pte)
> -{
> -	return !(pte_raw(pte) & cpu_to_be64(_PAGE_PRIVILEGED));
> -}
> -
>  #define pte_access_permitted pte_access_permitted
>  static inline bool pte_access_permitted(pte_t pte, bool write)
>  {
> -	/*
> -	 * _PAGE_READ is needed for any access and will be
> -	 * cleared for PROT_NONE
> -	 */
> -	if (!pte_present(pte) || !pte_user(pte) || !pte_read(pte))
> +
> +	if (!pte_present(pte))
> +		return false;
> +
> +	if (!(pte_read(pte) || pte_exec(pte)))
>  		return false;
>
>  	if (write && !pte_write(pte))
> diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
> index ad2afa08e62e..b2eda22195f0 100644
> --- a/arch/powerpc/mm/book3s64/hash_utils.c
> +++ b/arch/powerpc/mm/book3s64/hash_utils.c
> @@ -310,9 +310,26 @@ unsigned long htab_convert_pte_flags(unsigned long pteflags, unsigned long flags
>  			else
>  				rflags |= 0x3;
>  		}
> +		WARN_ON(!(pteflags & _PAGE_RWX));
>  	} else {
>  		if (pteflags & _PAGE_RWX)
>  			rflags |= 0x2;
> +		else {
> +			/*
> +			 * PAGE_NONE will get mapped to 0b110 (slb key 1 no access)
> +			 * We picked 0b110 instead of 0b000 so that slb key 0 will
> +			 * get only read only access for the same rflags.
> +			 */
> +			if (mmu_has_feature(MMU_FTR_KERNEL_RO))
> +				rflags |= (HPTE_R_PP0 | 0x2);
> +			/*
> +			 * rflags = HPTE_R_N
> +			 * Without KERNEL_RO feature this will result in slb
> +			 * key 0 with read/write. But ISA only supports that.
> +			 * There is no key 1 no-access and key 0 read-only
> +			 * pp bit support.
> +			 */
> +		}
>  		if (!((pteflags & _PAGE_WRITE) && (pteflags & _PAGE_DIRTY)))
>  			rflags |= 0x1;
>  	}
On 11/13/23 3:46 PM, Nicholas Piggin wrote:
> On Thu Nov 2, 2023 at 11:23 PM AEST, Aneesh Kumar K.V wrote:
>> There used to be a dependency on _PAGE_PRIVILEGED with pte_savedwrite.
>> But that got dropped by
>> commit 6a56ccbcf6c6 ("mm/autonuma: use can_change_(pte|pmd)_writable() to replace savedwrite")
>>
>> With this change numa fault pte (pte_protnone()) gets mapped as regular user pte
>> with RWX cleared (no-access).
>
> You mean "that" above change (not *this* change), right?
>

With the change in this patch numa fault pte will not have _PAGE_PRIVILEGED set because
PAGE_NONE now maps to just _PAGE_BASE

>> This also remove pte_user() from
>> book3s/64.
>
> Nice cleanup. That was an annoying hack.
>
>> pte_access_permitted() now checks for _PAGE_EXEC because we now support
>> EXECONLY mappings.
>
> AFAIKS pte_exec() is not required, GUP is really only for read or
> write access. It should be a separate patch if you think it's needed.
>

I have a v2 dropping that based on https://lore.kernel.org/linux-mm/87bkc1oe8c.fsf@linux.ibm.com
I kept pte_user with pte_access_permitted being the only user. I can open code that
if needed.

>>
>> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
>> ---
>>  arch/powerpc/include/asm/book3s/64/pgtable.h | 23 +++++---------------
>>  arch/powerpc/mm/book3s64/hash_utils.c        | 17 +++++++++++++++
>>  2 files changed, 23 insertions(+), 17 deletions(-)
>>
>> diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
>> index cb77eddca54b..7c7de7b56df0 100644
>> --- a/arch/powerpc/include/asm/book3s/64/pgtable.h
>> +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
>> @@ -17,12 +17,6 @@
>>  #define _PAGE_EXEC		0x00001 /* execute permission */
>>  #define _PAGE_WRITE		0x00002 /* write access allowed */
>>  #define _PAGE_READ		0x00004 /* read access allowed */
>> -#define _PAGE_NA		_PAGE_PRIVILEGED
>> -#define _PAGE_NAX		_PAGE_EXEC
>> -#define _PAGE_RO		_PAGE_READ
>> -#define _PAGE_ROX		(_PAGE_READ | _PAGE_EXEC)
>> -#define _PAGE_RW		(_PAGE_READ | _PAGE_WRITE)
>> -#define _PAGE_RWX		(_PAGE_READ | _PAGE_WRITE | _PAGE_EXEC)
>>  #define _PAGE_PRIVILEGED	0x00008 /* kernel access only */
>>  #define _PAGE_SAO		0x00010 /* Strong access order */
>>  #define _PAGE_NON_IDEMPOTENT	0x00020 /* non idempotent memory */
>
> Did you leave PAGE_NONE as _PAGE_BASE | _PAGE_PRIVILEGED below?
> Shouldn't that be changed too? Then this patch is not only hash
> but also radix.
>

A recent patch moved PAGE_NONE to pgtable-mask.h
a5a08dc90f4513d1a78582ec24b687fad01cc843

> Why is the hash change required? Previously PAGE_NONE relied on
> privileged bit to prevent access, now you need to handle a PTE
> without that bit? In that case could that be patch 1, then the
> rest patch 2?
>

Looking at older kernel, I guess check_pte_access used _PAGE_PRIVILEGED
as a way to prevent access to PAGE_NONE ptes. We now depend on
_PAGE_READ

> __pte_flags_need_flush() should be updated after this too,
> basically revert commit 1abce0580b894.
>

Will update the patch to include the revert.

>> @@ -119,9 +113,9 @@
>>  /*
>>   * user access blocked by key
>>   */
>> -#define _PAGE_KERNEL_RW		(_PAGE_PRIVILEGED | _PAGE_RW | _PAGE_DIRTY)
>>  #define _PAGE_KERNEL_RO		(_PAGE_PRIVILEGED | _PAGE_READ)
>>  #define _PAGE_KERNEL_ROX	(_PAGE_PRIVILEGED | _PAGE_READ | _PAGE_EXEC)
>> +#define _PAGE_KERNEL_RW		(_PAGE_PRIVILEGED | _PAGE_RW | _PAGE_DIRTY)
>>  #define _PAGE_KERNEL_RWX	(_PAGE_PRIVILEGED | _PAGE_DIRTY | _PAGE_RW | _PAGE_EXEC)
>>  /*
>>   * _PAGE_CHG_MASK masks of bits that are to be preserved across
>
> No need to reorder defines.
>

ok

> Thanks,
> Nick
>
>> @@ -523,19 +517,14 @@ static inline bool arch_pte_access_permitted(u64 pte, bool write, bool execute)
>>  }
>>  #endif /* CONFIG_PPC_MEM_KEYS */
>>
>> -static inline bool pte_user(pte_t pte)
>> -{
>> -	return !(pte_raw(pte) & cpu_to_be64(_PAGE_PRIVILEGED));
>> -}
>> -
>>  #define pte_access_permitted pte_access_permitted
>>  static inline bool pte_access_permitted(pte_t pte, bool write)
>>  {
>> -	/*
>> -	 * _PAGE_READ is needed for any access and will be
>> -	 * cleared for PROT_NONE
>> -	 */
>> -	if (!pte_present(pte) || !pte_user(pte) || !pte_read(pte))
>> +
>> +	if (!pte_present(pte))
>> +		return false;
>> +
>> +	if (!(pte_read(pte) || pte_exec(pte)))
>>  		return false;
>>
>>  	if (write && !pte_write(pte))
>> diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
>> index ad2afa08e62e..b2eda22195f0 100644
>> --- a/arch/powerpc/mm/book3s64/hash_utils.c
>> +++ b/arch/powerpc/mm/book3s64/hash_utils.c
>> @@ -310,9 +310,26 @@ unsigned long htab_convert_pte_flags(unsigned long pteflags, unsigned long flags
>>  			else
>>  				rflags |= 0x3;
>>  		}
>> +		WARN_ON(!(pteflags & _PAGE_RWX));
>>  	} else {
>>  		if (pteflags & _PAGE_RWX)
>>  			rflags |= 0x2;
>> +		else {
>> +			/*
>> +			 * PAGE_NONE will get mapped to 0b110 (slb key 1 no access)
>> +			 * We picked 0b110 instead of 0b000 so that slb key 0 will
>> +			 * get only read only access for the same rflags.
>> +			 */
>> +			if (mmu_has_feature(MMU_FTR_KERNEL_RO))
>> +				rflags |= (HPTE_R_PP0 | 0x2);
>> +			/*
>> +			 * rflags = HPTE_R_N
>> +			 * Without KERNEL_RO feature this will result in slb
>> +			 * key 0 with read/write. But ISA only supports that.
>> +			 * There is no key 1 no-access and key 0 read-only
>> +			 * pp bit support.
>> +			 */
>> +		}
>>  		if (!((pteflags & _PAGE_WRITE) && (pteflags & _PAGE_DIRTY)))
>>  			rflags |= 0x1;
>>  	}
>

V2 is also dropping the above change, because we will never have hash table entries inserted.

This is added to commit message.

Hash fault handling code did get some WARN_ON added because those
functions are not expected to get called with _PAGE_READ cleared.
commit 18061c17c8ec ("powerpc/mm: Update PROTFAULT handling in the page fault path")
explains the details.

-aneesh
On Mon Nov 13, 2023 at 8:45 PM AEST, Aneesh Kumar K V wrote:
> On 11/13/23 3:46 PM, Nicholas Piggin wrote:
> > On Thu Nov 2, 2023 at 11:23 PM AEST, Aneesh Kumar K.V wrote:
> >> There used to be a dependency on _PAGE_PRIVILEGED with pte_savedwrite.
> >> But that got dropped by
> >> commit 6a56ccbcf6c6 ("mm/autonuma: use can_change_(pte|pmd)_writable() to replace savedwrite")
> >>
> >> With this change numa fault pte (pte_protnone()) gets mapped as regular user pte
> >> with RWX cleared (no-access).
> >
> > You mean "that" above change (not *this* change), right?
> >
>
> With the change in this patch numa fault pte will not have _PAGE_PRIVILEGED set because
> PAGE_NONE now maps to just _PAGE_BASE

Right, I guess I was confused because I missed the pgtable-mask.h
move.

> >> This also remove pte_user() from
> >> book3s/64.
> >
> > Nice cleanup. That was an annoying hack.
> >
> >> pte_access_permitted() now checks for _PAGE_EXEC because we now support
> >> EXECONLY mappings.
> >
> > AFAIKS pte_exec() is not required, GUP is really only for read or
> > write access. It should be a separate patch if you think it's needed.
> >
>
> I have a v2 dropping that based on https://lore.kernel.org/linux-mm/87bkc1oe8c.fsf@linux.ibm.com
> I kept pte_user with pte_access_permitted being the only user. I can open code that
> if needed.

I don't mind keeping pte_user() and the check in
pte_access_permitted().

> >>
> >> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
> >> ---
> >>  arch/powerpc/include/asm/book3s/64/pgtable.h | 23 +++++---------------
> >>  arch/powerpc/mm/book3s64/hash_utils.c        | 17 +++++++++++++++
> >>  2 files changed, 23 insertions(+), 17 deletions(-)
> >>
> >> diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
> >> index cb77eddca54b..7c7de7b56df0 100644
> >> --- a/arch/powerpc/include/asm/book3s/64/pgtable.h
> >> +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
> >> @@ -17,12 +17,6 @@
> >>  #define _PAGE_EXEC		0x00001 /* execute permission */
> >>  #define _PAGE_WRITE		0x00002 /* write access allowed */
> >>  #define _PAGE_READ		0x00004 /* read access allowed */
> >> -#define _PAGE_NA		_PAGE_PRIVILEGED
> >> -#define _PAGE_NAX		_PAGE_EXEC
> >> -#define _PAGE_RO		_PAGE_READ
> >> -#define _PAGE_ROX		(_PAGE_READ | _PAGE_EXEC)
> >> -#define _PAGE_RW		(_PAGE_READ | _PAGE_WRITE)
> >> -#define _PAGE_RWX		(_PAGE_READ | _PAGE_WRITE | _PAGE_EXEC)
> >>  #define _PAGE_PRIVILEGED	0x00008 /* kernel access only */
> >>  #define _PAGE_SAO		0x00010 /* Strong access order */
> >>  #define _PAGE_NON_IDEMPOTENT	0x00020 /* non idempotent memory */
> >
> > Did you leave PAGE_NONE as _PAGE_BASE | _PAGE_PRIVILEGED below?
> > Shouldn't that be changed too? Then this patch is not only hash
> > but also radix.
> >
>
> A recent patch moved PAGE_NONE to pgtable-mask.h
> a5a08dc90f4513d1a78582ec24b687fad01cc843

Thanks I did miss it.

> > Why is the hash change required? Previously PAGE_NONE relied on
> > privileged bit to prevent access, now you need to handle a PTE
> > without that bit? In that case could that be patch 1, then the
> > rest patch 2?
> >
>
> Looking at older kernel, I guess check_pte_access used _PAGE_PRIVILEGED
> as a way to prevent access to PAGE_NONE ptes. We now depend on
> _PAGE_READ
>
> > __pte_flags_need_flush() should be updated after this too,
> > basically revert commit 1abce0580b894.
> >
>
> Will update the patch to include the revert.
>
> >> @@ -119,9 +113,9 @@
> >>  /*
> >>   * user access blocked by key
> >>   */
> >> -#define _PAGE_KERNEL_RW		(_PAGE_PRIVILEGED | _PAGE_RW | _PAGE_DIRTY)
> >>  #define _PAGE_KERNEL_RO		(_PAGE_PRIVILEGED | _PAGE_READ)
> >>  #define _PAGE_KERNEL_ROX	(_PAGE_PRIVILEGED | _PAGE_READ | _PAGE_EXEC)
> >> +#define _PAGE_KERNEL_RW		(_PAGE_PRIVILEGED | _PAGE_RW | _PAGE_DIRTY)
> >>  #define _PAGE_KERNEL_RWX	(_PAGE_PRIVILEGED | _PAGE_DIRTY | _PAGE_RW | _PAGE_EXEC)
> >>  /*
> >>   * _PAGE_CHG_MASK masks of bits that are to be preserved across
> >
> > No need to reorder defines.
> >
>
> ok
>
> > Thanks,
> > Nick
> >
> >> @@ -523,19 +517,14 @@ static inline bool arch_pte_access_permitted(u64 pte, bool write, bool execute)
> >>  }
> >>  #endif /* CONFIG_PPC_MEM_KEYS */
> >>
> >> -static inline bool pte_user(pte_t pte)
> >> -{
> >> -	return !(pte_raw(pte) & cpu_to_be64(_PAGE_PRIVILEGED));
> >> -}
> >> -
> >>  #define pte_access_permitted pte_access_permitted
> >>  static inline bool pte_access_permitted(pte_t pte, bool write)
> >>  {
> >> -	/*
> >> -	 * _PAGE_READ is needed for any access and will be
> >> -	 * cleared for PROT_NONE
> >> -	 */
> >> -	if (!pte_present(pte) || !pte_user(pte) || !pte_read(pte))
> >> +
> >> +	if (!pte_present(pte))
> >> +		return false;
> >> +
> >> +	if (!(pte_read(pte) || pte_exec(pte)))
> >>  		return false;
> >>
> >>  	if (write && !pte_write(pte))
> >> diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
> >> index ad2afa08e62e..b2eda22195f0 100644
> >> --- a/arch/powerpc/mm/book3s64/hash_utils.c
> >> +++ b/arch/powerpc/mm/book3s64/hash_utils.c
> >> @@ -310,9 +310,26 @@ unsigned long htab_convert_pte_flags(unsigned long pteflags, unsigned long flags
> >>  			else
> >>  				rflags |= 0x3;
> >>  		}
> >> +		WARN_ON(!(pteflags & _PAGE_RWX));
> >>  	} else {
> >>  		if (pteflags & _PAGE_RWX)
> >>  			rflags |= 0x2;
> >> +		else {
> >> +			/*
> >> +			 * PAGE_NONE will get mapped to 0b110 (slb key 1 no access)
> >> +			 * We picked 0b110 instead of 0b000 so that slb key 0 will
> >> +			 * get only read only access for the same rflags.
> >> +			 */
> >> +			if (mmu_has_feature(MMU_FTR_KERNEL_RO))
> >> +				rflags |= (HPTE_R_PP0 | 0x2);
> >> +			/*
> >> +			 * rflags = HPTE_R_N
> >> +			 * Without KERNEL_RO feature this will result in slb
> >> +			 * key 0 with read/write. But ISA only supports that.
> >> +			 * There is no key 1 no-access and key 0 read-only
> >> +			 * pp bit support.
> >> +			 */
> >> +		}
> >>  		if (!((pteflags & _PAGE_WRITE) && (pteflags & _PAGE_DIRTY)))
> >>  			rflags |= 0x1;
> >>  	}
> >
>
> V2 is also dropping the above change, because we will never have hash table entries inserted.
>
> This is added to commit message.
>
> Hash fault handling code did get some WARN_ON added because those
> functions are not expected to get called with _PAGE_READ cleared.
> commit 18061c17c8ec ("powerpc/mm: Update PROTFAULT handling in the page fault path")
> explains the details.

Should it be a WARN_ON_ONCE? Any useful way to recover from it? Could
the added WARN come with some comments from that commit that explain
it?

Thanks,
Nick
On 11/13/23 5:17 PM, Nicholas Piggin wrote:
> On Mon Nov 13, 2023 at 8:45 PM AEST, Aneesh Kumar K V wrote:

....

>>>> diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
>>>> index ad2afa08e62e..b2eda22195f0 100644
>>>> --- a/arch/powerpc/mm/book3s64/hash_utils.c
>>>> +++ b/arch/powerpc/mm/book3s64/hash_utils.c
>>>> @@ -310,9 +310,26 @@ unsigned long htab_convert_pte_flags(unsigned long pteflags, unsigned long flags
>>>>  			else
>>>>  				rflags |= 0x3;
>>>>  		}
>>>> +		WARN_ON(!(pteflags & _PAGE_RWX));
>>>>  	} else {
>>>>  		if (pteflags & _PAGE_RWX)
>>>>  			rflags |= 0x2;
>>>> +		else {
>>>> +			/*
>>>> +			 * PAGE_NONE will get mapped to 0b110 (slb key 1 no access)
>>>> +			 * We picked 0b110 instead of 0b000 so that slb key 0 will
>>>> +			 * get only read only access for the same rflags.
>>>> +			 */
>>>> +			if (mmu_has_feature(MMU_FTR_KERNEL_RO))
>>>> +				rflags |= (HPTE_R_PP0 | 0x2);
>>>> +			/*
>>>> +			 * rflags = HPTE_R_N
>>>> +			 * Without KERNEL_RO feature this will result in slb
>>>> +			 * key 0 with read/write. But ISA only supports that.
>>>> +			 * There is no key 1 no-access and key 0 read-only
>>>> +			 * pp bit support.
>>>> +			 */
>>>> +		}
>>>>  		if (!((pteflags & _PAGE_WRITE) && (pteflags & _PAGE_DIRTY)))
>>>>  			rflags |= 0x1;
>>>>  	}
>>>
>>
>> V2 is also dropping the above change, because we will never have hash table entries inserted.
>>
>> This is added to commit message.
>>
>> Hash fault handling code did get some WARN_ON added because those
>> functions are not expected to get called with _PAGE_READ cleared.
>> commit 18061c17c8ec ("powerpc/mm: Update PROTFAULT handling in the page fault path")
>> explains the details.
>
> Should it be a WARN_ON_ONCE? Any useful way to recover from it? Could
> the added WARN come with some comments from that commit that explain
> it?
>

This should never happen unless someone messed up check_pte_access().
The WARN_ON() is a way to remind me not to add no access ppp mapping in the
htab_convert_pte_flags() as I did above.

Initially I was planning to add only a comment, but then in the rare case of
we getting it wrong it is nice to do a console log.

-aneesh
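As a side note for readers following the pp-bit discussion, the user-page rflags selection from the (now dropped) htab_convert_pte_flags() hunk can be modeled in isolation. The following is a hedged, standalone sketch: the _PAGE_EXEC/_PAGE_WRITE/_PAGE_READ values come from the quoted pgtable.h hunk, while _PAGE_DIRTY, HPTE_R_PP0, and the kernel_ro flag (standing in for mmu_has_feature(MMU_FTR_KERNEL_RO)) are illustrative assumptions, not authoritative kernel constants.

```c
#include <stdbool.h>

/* PTE permission bits as in the quoted pgtable.h hunk. */
#define _PAGE_EXEC  0x00001UL
#define _PAGE_WRITE 0x00002UL
#define _PAGE_READ  0x00004UL
#define _PAGE_RWX   (_PAGE_READ | _PAGE_WRITE | _PAGE_EXEC)

/* Illustrative stand-ins for this sketch only. */
#define _PAGE_DIRTY 0x00080UL
#define HPTE_R_PP0  0x8000000000000000UL

/*
 * Model of the user-page branch of htab_convert_pte_flags() as posted:
 * pages with some RWX permission get pp = 0b10 (key 0 read/write),
 * PAGE_NONE (no RWX bits) gets pp = 0b110 (key 1 no access) when the
 * KERNEL_RO feature is available, and non-writable or clean pages are
 * additionally downgraded to read-only via the low pp bit.
 */
static unsigned long user_rflags(unsigned long pteflags, bool kernel_ro)
{
    unsigned long rflags = 0;

    if (pteflags & _PAGE_RWX)
        rflags |= 0x2;                  /* key 0 read/write */
    else if (kernel_ro)
        rflags |= HPTE_R_PP0 | 0x2;     /* 0b110: key 1 no access */

    /* Not writable, or writable but clean: force read-only. */
    if (!((pteflags & _PAGE_WRITE) && (pteflags & _PAGE_DIRTY)))
        rflags |= 0x1;

    return rflags;
}
```

Running the four interesting cases (RW dirty, RO, PAGE_NONE with and without KERNEL_RO) reproduces the encodings discussed in the thread.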
diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index cb77eddca54b..7c7de7b56df0 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -17,12 +17,6 @@
 #define _PAGE_EXEC		0x00001 /* execute permission */
 #define _PAGE_WRITE		0x00002 /* write access allowed */
 #define _PAGE_READ		0x00004 /* read access allowed */
-#define _PAGE_NA		_PAGE_PRIVILEGED
-#define _PAGE_NAX		_PAGE_EXEC
-#define _PAGE_RO		_PAGE_READ
-#define _PAGE_ROX		(_PAGE_READ | _PAGE_EXEC)
-#define _PAGE_RW		(_PAGE_READ | _PAGE_WRITE)
-#define _PAGE_RWX		(_PAGE_READ | _PAGE_WRITE | _PAGE_EXEC)
 #define _PAGE_PRIVILEGED	0x00008 /* kernel access only */
 #define _PAGE_SAO		0x00010 /* Strong access order */
 #define _PAGE_NON_IDEMPOTENT	0x00020 /* non idempotent memory */
@@ -119,9 +113,9 @@
 /*
  * user access blocked by key
  */
-#define _PAGE_KERNEL_RW		(_PAGE_PRIVILEGED | _PAGE_RW | _PAGE_DIRTY)
 #define _PAGE_KERNEL_RO		(_PAGE_PRIVILEGED | _PAGE_READ)
 #define _PAGE_KERNEL_ROX	(_PAGE_PRIVILEGED | _PAGE_READ | _PAGE_EXEC)
+#define _PAGE_KERNEL_RW		(_PAGE_PRIVILEGED | _PAGE_RW | _PAGE_DIRTY)
 #define _PAGE_KERNEL_RWX	(_PAGE_PRIVILEGED | _PAGE_DIRTY | _PAGE_RW | _PAGE_EXEC)
 /*
  * _PAGE_CHG_MASK masks of bits that are to be preserved across
@@ -523,19 +517,14 @@ static inline bool arch_pte_access_permitted(u64 pte, bool write, bool execute)
 }
 #endif /* CONFIG_PPC_MEM_KEYS */
 
-static inline bool pte_user(pte_t pte)
-{
-	return !(pte_raw(pte) & cpu_to_be64(_PAGE_PRIVILEGED));
-}
-
 #define pte_access_permitted pte_access_permitted
 static inline bool pte_access_permitted(pte_t pte, bool write)
 {
-	/*
-	 * _PAGE_READ is needed for any access and will be
-	 * cleared for PROT_NONE
-	 */
-	if (!pte_present(pte) || !pte_user(pte) || !pte_read(pte))
+
+	if (!pte_present(pte))
+		return false;
+
+	if (!(pte_read(pte) || pte_exec(pte)))
 		return false;
 
 	if (write && !pte_write(pte))
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
index ad2afa08e62e..b2eda22195f0 100644
--- a/arch/powerpc/mm/book3s64/hash_utils.c
+++ b/arch/powerpc/mm/book3s64/hash_utils.c
@@ -310,9 +310,26 @@ unsigned long htab_convert_pte_flags(unsigned long pteflags, unsigned long flags
 			else
 				rflags |= 0x3;
 		}
+		WARN_ON(!(pteflags & _PAGE_RWX));
 	} else {
 		if (pteflags & _PAGE_RWX)
 			rflags |= 0x2;
+		else {
+			/*
+			 * PAGE_NONE will get mapped to 0b110 (slb key 1 no access)
+			 * We picked 0b110 instead of 0b000 so that slb key 0 will
+			 * get only read only access for the same rflags.
+			 */
+			if (mmu_has_feature(MMU_FTR_KERNEL_RO))
+				rflags |= (HPTE_R_PP0 | 0x2);
+			/*
+			 * rflags = HPTE_R_N
+			 * Without KERNEL_RO feature this will result in slb
+			 * key 0 with read/write. But ISA only supports that.
+			 * There is no key 1 no-access and key 0 read-only
+			 * pp bit support.
+			 */
+		}
 		if (!((pteflags & _PAGE_WRITE) && (pteflags & _PAGE_DIRTY)))
 			rflags |= 0x1;
 	}
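To illustrate the access-check change in the diff above, here is a minimal user-space model of the new pte_access_permitted() logic: a present pte with neither read nor exec permission (the new PAGE_NONE encoding) permits no access at all. The read/write/exec bit values are taken from the patch; pte_t, _PAGE_PRESENT, and the helper functions are simplified stand-ins for the kernel's real implementations, not its actual definitions.

```c
#include <stdbool.h>
#include <stdint.h>

/* Bit values from the patched pgtable.h. */
#define _PAGE_EXEC    0x00001ULL
#define _PAGE_WRITE   0x00002ULL
#define _PAGE_READ    0x00004ULL
/* Illustrative stand-in; the kernel's _PAGE_PRESENT differs. */
#define _PAGE_PRESENT 0x8000000000000000ULL

typedef uint64_t pte_t; /* simplified model, not the kernel type */

static bool pte_present(pte_t pte) { return pte & _PAGE_PRESENT; }
static bool pte_read(pte_t pte)    { return pte & _PAGE_READ; }
static bool pte_write(pte_t pte)   { return pte & _PAGE_WRITE; }
static bool pte_exec(pte_t pte)    { return pte & _PAGE_EXEC; }

/* Mirrors the control flow of the patched pte_access_permitted(). */
static bool pte_access_permitted(pte_t pte, bool write)
{
    if (!pte_present(pte))
        return false;

    /* PAGE_NONE: no read and no exec means no access. */
    if (!(pte_read(pte) || pte_exec(pte)))
        return false;

    if (write && !pte_write(pte))
        return false;

    return true;
}
```

Note how, with _PAGE_PRIVILEGED dropped from PAGE_NONE, the check relies purely on the absence of R/W/X bits rather than on a pte_user() test.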
There used to be a dependency on _PAGE_PRIVILEGED with pte_savedwrite.
But that got dropped by
commit 6a56ccbcf6c6 ("mm/autonuma: use can_change_(pte|pmd)_writable() to replace savedwrite")

With this change, a numa fault pte (pte_protnone()) gets mapped as a regular user pte
with RWX cleared (no access).

This also removes pte_user() from book3s/64.

pte_access_permitted() now checks for _PAGE_EXEC because we now support
EXECONLY mappings.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
---
 arch/powerpc/include/asm/book3s/64/pgtable.h | 23 +++++---------------
 arch/powerpc/mm/book3s64/hash_utils.c        | 17 +++++++++++++++
 2 files changed, 23 insertions(+), 17 deletions(-)