Message ID | 20231204093638.71503-1-aneesh.kumar@kernel.org (mailing list archive) |
---|---|
State | Accepted |
Commit | 773b93f1d1c38c5c0d5308b8c9229c7a6ec5b2a0 |
Series | [v2,1/2] powerpc/book3s/hash: Drop _PAGE_PRIVILEGED from PAGE_NONE |
On 04/12/2023 at 10:36, aneesh.kumar@kernel.org wrote:
> From: "Aneesh Kumar K.V (IBM)" <aneesh.kumar@kernel.org>
>
> There used to be a dependency on _PAGE_PRIVILEGED with pte_savedwrite,
> but that was dropped by
> commit 6a56ccbcf6c6 ("mm/autonuma: use can_change_(pte|pmd)_writable() to replace savedwrite").
>
> With the change in this patch, a NUMA fault pte (pte_protnone()) gets mapped
> as a regular user pte with RWX cleared (no-access), whereas earlier it was
> mapped _PAGE_PRIVILEGED.
>
> The hash fault handling code gets some WARN_ONs added in this patch because
> those functions are not expected to be called with _PAGE_READ cleared.
> commit 18061c17c8ec ("powerpc/mm: Update PROTFAULT handling in the page
> fault path") explains the details.
>
> Signed-off-by: Aneesh Kumar K.V (IBM) <aneesh.kumar@kernel.org>

Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>

> ---
>  arch/powerpc/include/asm/book3s/64/pgtable.h | 10 ++--------
>  arch/powerpc/mm/book3s64/hash_utils.c        |  7 +++++++
>  2 files changed, 9 insertions(+), 8 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
> index cb77eddca54b..927d585652bc 100644
> --- a/arch/powerpc/include/asm/book3s/64/pgtable.h
> +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
> @@ -17,12 +17,6 @@
>  #define _PAGE_EXEC		0x00001 /* execute permission */
>  #define _PAGE_WRITE		0x00002 /* write access allowed */
>  #define _PAGE_READ		0x00004 /* read access allowed */
> -#define _PAGE_NA		_PAGE_PRIVILEGED
> -#define _PAGE_NAX		_PAGE_EXEC
> -#define _PAGE_RO		_PAGE_READ
> -#define _PAGE_ROX		(_PAGE_READ | _PAGE_EXEC)
> -#define _PAGE_RW		(_PAGE_READ | _PAGE_WRITE)
> -#define _PAGE_RWX		(_PAGE_READ | _PAGE_WRITE | _PAGE_EXEC)
>  #define _PAGE_PRIVILEGED	0x00008 /* kernel access only */
>  #define _PAGE_SAO		0x00010 /* Strong access order */
>  #define _PAGE_NON_IDEMPOTENT	0x00020 /* non idempotent memory */
> @@ -532,8 +526,8 @@ static inline bool pte_user(pte_t pte)
>  static inline bool pte_access_permitted(pte_t pte, bool write)
>  {
>  	/*
> -	 * _PAGE_READ is needed for any access and will be
> -	 * cleared for PROT_NONE
> +	 * _PAGE_READ is needed for any access and will be cleared for
> +	 * PROT_NONE. Execute-only mapping via PROT_EXEC also returns false.
>  	 */
>  	if (!pte_present(pte) || !pte_user(pte) || !pte_read(pte))
>  		return false;
> diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
> index ad2afa08e62e..0626a25b0d72 100644
> --- a/arch/powerpc/mm/book3s64/hash_utils.c
> +++ b/arch/powerpc/mm/book3s64/hash_utils.c
> @@ -310,9 +310,16 @@ unsigned long htab_convert_pte_flags(unsigned long pteflags, unsigned long flags
>  			else
>  				rflags |= 0x3;
>  		}
> +		VM_WARN_ONCE(!(pteflags & _PAGE_RWX), "no-access mapping request");
>  	} else {
>  		if (pteflags & _PAGE_RWX)
>  			rflags |= 0x2;
> +		/*
> +		 * We should never hit this in normal fault handling because
> +		 * a permission check (check_pte_access()) will bubble this
> +		 * up to a higher-level linux handler even for PAGE_NONE.
> +		 */
> +		VM_WARN_ONCE(!(pteflags & _PAGE_RWX), "no-access mapping request");
>  		if (!((pteflags & _PAGE_WRITE) && (pteflags & _PAGE_DIRTY)))
>  			rflags |= 0x1;
>  	}
On Mon, 04 Dec 2023 15:06:37 +0530, aneesh.kumar@kernel.org wrote:
> There used to be a dependency on _PAGE_PRIVILEGED with pte_savedwrite.
> But that got dropped by
> commit 6a56ccbcf6c6 ("mm/autonuma: use can_change_(pte|pmd)_writable() to replace savedwrite")
>
> With the change in this patch numa fault pte (pte_protnone()) gets mapped as regular user pte
> with RWX cleared (no-access) whereas earlier it used to be mapped _PAGE_PRIVILEGED.
>
> [...]

Applied to powerpc/next.

[1/2] powerpc/book3s/hash: Drop _PAGE_PRIVILEGED from PAGE_NONE
      https://git.kernel.org/powerpc/c/773b93f1d1c38c5c0d5308b8c9229c7a6ec5b2a0
[2/2] powerpc/book3s64: Avoid __pte_protnone() check in __pte_flags_need_flush()
      https://git.kernel.org/powerpc/c/a59c14f6b4caad7671dfb81737beba0b313897e4

cheers
diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index cb77eddca54b..927d585652bc 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -17,12 +17,6 @@
 #define _PAGE_EXEC		0x00001 /* execute permission */
 #define _PAGE_WRITE		0x00002 /* write access allowed */
 #define _PAGE_READ		0x00004 /* read access allowed */
-#define _PAGE_NA		_PAGE_PRIVILEGED
-#define _PAGE_NAX		_PAGE_EXEC
-#define _PAGE_RO		_PAGE_READ
-#define _PAGE_ROX		(_PAGE_READ | _PAGE_EXEC)
-#define _PAGE_RW		(_PAGE_READ | _PAGE_WRITE)
-#define _PAGE_RWX		(_PAGE_READ | _PAGE_WRITE | _PAGE_EXEC)
 #define _PAGE_PRIVILEGED	0x00008 /* kernel access only */
 #define _PAGE_SAO		0x00010 /* Strong access order */
 #define _PAGE_NON_IDEMPOTENT	0x00020 /* non idempotent memory */
@@ -532,8 +526,8 @@ static inline bool pte_user(pte_t pte)
 static inline bool pte_access_permitted(pte_t pte, bool write)
 {
 	/*
-	 * _PAGE_READ is needed for any access and will be
-	 * cleared for PROT_NONE
+	 * _PAGE_READ is needed for any access and will be cleared for
+	 * PROT_NONE. Execute-only mapping via PROT_EXEC also returns false.
 	 */
 	if (!pte_present(pte) || !pte_user(pte) || !pte_read(pte))
 		return false;
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
index ad2afa08e62e..0626a25b0d72 100644
--- a/arch/powerpc/mm/book3s64/hash_utils.c
+++ b/arch/powerpc/mm/book3s64/hash_utils.c
@@ -310,9 +310,16 @@ unsigned long htab_convert_pte_flags(unsigned long pteflags, unsigned long flags
 			else
 				rflags |= 0x3;
 		}
+		VM_WARN_ONCE(!(pteflags & _PAGE_RWX), "no-access mapping request");
 	} else {
 		if (pteflags & _PAGE_RWX)
 			rflags |= 0x2;
+		/*
+		 * We should never hit this in normal fault handling because
+		 * a permission check (check_pte_access()) will bubble this
+		 * to higher level linux handler even for PAGE_NONE.
+		 */
+		VM_WARN_ONCE(!(pteflags & _PAGE_RWX), "no-access mapping request");
 		if (!((pteflags & _PAGE_WRITE) && (pteflags & _PAGE_DIRTY)))
 			rflags |= 0x1;
 	}