Message ID | 20211021035417.2157804-13-npiggin@gmail.com (mailing list archive) |
---|---|
State | Superseded |
Series | powerpc: Make hash MMU code build configurable |
On 21/10/2021 at 05:54, Nicholas Piggin wrote:
> mmu_linear_psize is only set at boot once on 64e, is not necessarily
> the correct size of the linear map pages, and is never used anywhere
> except memremap_compat_align.
>
> Remove mmu_linear_psize and hard code the 1GB value instead in
> memremap_compat_align.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
>  arch/powerpc/mm/ioremap.c    | 6 +++++-
>  arch/powerpc/mm/nohash/tlb.c | 9 ---------
>  2 files changed, 5 insertions(+), 10 deletions(-)
>
> diff --git a/arch/powerpc/mm/ioremap.c b/arch/powerpc/mm/ioremap.c
> index 57342154d2b0..730c3bbe4759 100644
> --- a/arch/powerpc/mm/ioremap.c
> +++ b/arch/powerpc/mm/ioremap.c
> @@ -109,12 +109,16 @@ void __iomem *do_ioremap(phys_addr_t pa, phys_addr_t offset, unsigned long size,
>   */
>  unsigned long memremap_compat_align(void)
>  {
> +#ifdef CONFIG_PPC_BOOK3E_64

I don't think this function really belongs to ioremap.c

Could avoid the #ifdef by going in:

arch/powerpc/mm/nohash/book3e_pgtable.c

and

arch/powerpc/mm/book3s64/pgtable.c

> +	// 1GB maximum possible size of the linear mapping.
> +	return max(SUBSECTION_SIZE, 1UL << 30);

Use SZ_1G

> +#else
>  	unsigned int shift = mmu_psize_defs[mmu_linear_psize].shift;
>
>  	if (radix_enabled())
>  		return SUBSECTION_SIZE;
>  	return max(SUBSECTION_SIZE, 1UL << shift);
> -
> +#endif
>  }
>  EXPORT_SYMBOL_GPL(memremap_compat_align);
>  #endif
> diff --git a/arch/powerpc/mm/nohash/tlb.c b/arch/powerpc/mm/nohash/tlb.c
> index 5872f69141d5..8c1523ae7f7f 100644
> --- a/arch/powerpc/mm/nohash/tlb.c
> +++ b/arch/powerpc/mm/nohash/tlb.c
> @@ -150,7 +150,6 @@ static inline int mmu_get_tsize(int psize)
>   */
>  #ifdef CONFIG_PPC64
>
> -int mmu_linear_psize;	/* Page size used for the linear mapping */
>  int mmu_pte_psize;	/* Page size used for PTE pages */
>  int mmu_vmemmap_psize;	/* Page size used for the virtual mem map */
>  int book3e_htw_mode;	/* HW tablewalk?  Value is PPC_HTW_* */
> @@ -655,14 +654,6 @@ static void early_init_this_mmu(void)
>
>  static void __init early_init_mmu_global(void)
>  {
> -	/* XXX This will have to be decided at runtime, but right
> -	 * now our boot and TLB miss code hard wires it. Ideally
> -	 * we should find out a suitable page size and patch the
> -	 * TLB miss code (either that or use the PACA to store
> -	 * the value we want)
> -	 */
> -	mmu_linear_psize = MMU_PAGE_1G;
> -
>  	/* XXX This should be decided at runtime based on supported
>  	 * page sizes in the TLB, but for now let's assume 16M is
>  	 * always there and a good fit (which it probably is)
>
Excerpts from Christophe Leroy's message of October 21, 2021 3:03 pm:
>
> On 21/10/2021 at 05:54, Nicholas Piggin wrote:
>> mmu_linear_psize is only set at boot once on 64e, is not necessarily
>> the correct size of the linear map pages, and is never used anywhere
>> except memremap_compat_align.
>>
>> Remove mmu_linear_psize and hard code the 1GB value instead in
>> memremap_compat_align.
>>
>> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
>> ---
>>  arch/powerpc/mm/ioremap.c    | 6 +++++-
>>  arch/powerpc/mm/nohash/tlb.c | 9 ---------
>>  2 files changed, 5 insertions(+), 10 deletions(-)
>>
>> diff --git a/arch/powerpc/mm/ioremap.c b/arch/powerpc/mm/ioremap.c
>> index 57342154d2b0..730c3bbe4759 100644
>> --- a/arch/powerpc/mm/ioremap.c
>> +++ b/arch/powerpc/mm/ioremap.c
>> @@ -109,12 +109,16 @@ void __iomem *do_ioremap(phys_addr_t pa, phys_addr_t offset, unsigned long size,
>>   */
>>  unsigned long memremap_compat_align(void)
>>  {
>> +#ifdef CONFIG_PPC_BOOK3E_64
>
> I don't think this function really belongs to ioremap.c
>
> Could avoid the #ifdef by going in:
>
> arch/powerpc/mm/nohash/book3e_pgtable.c
>
> and
>
> arch/powerpc/mm/book3s64/pgtable.c

Yeah that might work.

Thanks,
Nick
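The split Christophe suggests (one `memremap_compat_align()` per platform file, no `#ifdef`) would look roughly like the sketch below. This is a standalone userspace sketch, not kernel code: `SZ_1G`, `SUBSECTION_SIZE`, `max()`, and the `radix_enabled()` / linear-map shift inputs are mocked as assumptions so it compiles outside the tree.

```c
#include <assert.h>

#define SZ_1G           (1UL << 30)  /* mock of the kernel's include/linux/sizes.h constant */
#define SUBSECTION_SIZE (1UL << 21)  /* assumption: 2MB subsections; the real value is config-dependent */
#define max(a, b)       ((a) > (b) ? (a) : (b))

/* Would live in arch/powerpc/mm/nohash/book3e_pgtable.c */
static unsigned long book3e_memremap_compat_align(void)
{
	/* 1GB maximum possible size of the linear mapping. */
	return max(SUBSECTION_SIZE, SZ_1G);
}

/* Would live in arch/powerpc/mm/book3s64/pgtable.c; radix_enabled()
 * and the linear-map page shift are stubbed as parameters here. */
static unsigned long book3s64_memremap_compat_align(int radix, unsigned int shift)
{
	if (radix)
		return SUBSECTION_SIZE;
	return max(SUBSECTION_SIZE, 1UL << shift);
}
```

Each file then carries only its own platform's logic, and the `#ifdef CONFIG_PPC_BOOK3E_64` in ioremap.c disappears because the build system picks the right object file.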
diff --git a/arch/powerpc/mm/ioremap.c b/arch/powerpc/mm/ioremap.c
index 57342154d2b0..730c3bbe4759 100644
--- a/arch/powerpc/mm/ioremap.c
+++ b/arch/powerpc/mm/ioremap.c
@@ -109,12 +109,16 @@ void __iomem *do_ioremap(phys_addr_t pa, phys_addr_t offset, unsigned long size,
  */
 unsigned long memremap_compat_align(void)
 {
+#ifdef CONFIG_PPC_BOOK3E_64
+	// 1GB maximum possible size of the linear mapping.
+	return max(SUBSECTION_SIZE, 1UL << 30);
+#else
 	unsigned int shift = mmu_psize_defs[mmu_linear_psize].shift;
 
 	if (radix_enabled())
 		return SUBSECTION_SIZE;
 	return max(SUBSECTION_SIZE, 1UL << shift);
-
+#endif
 }
 EXPORT_SYMBOL_GPL(memremap_compat_align);
 #endif
diff --git a/arch/powerpc/mm/nohash/tlb.c b/arch/powerpc/mm/nohash/tlb.c
index 5872f69141d5..8c1523ae7f7f 100644
--- a/arch/powerpc/mm/nohash/tlb.c
+++ b/arch/powerpc/mm/nohash/tlb.c
@@ -150,7 +150,6 @@ static inline int mmu_get_tsize(int psize)
  */
 #ifdef CONFIG_PPC64
 
-int mmu_linear_psize;	/* Page size used for the linear mapping */
 int mmu_pte_psize;	/* Page size used for PTE pages */
 int mmu_vmemmap_psize;	/* Page size used for the virtual mem map */
 int book3e_htw_mode;	/* HW tablewalk?  Value is PPC_HTW_* */
@@ -655,14 +654,6 @@ static void early_init_this_mmu(void)
 
 static void __init early_init_mmu_global(void)
 {
-	/* XXX This will have to be decided at runtime, but right
-	 * now our boot and TLB miss code hard wires it. Ideally
-	 * we should find out a suitable page size and patch the
-	 * TLB miss code (either that or use the PACA to store
-	 * the value we want)
-	 */
-	mmu_linear_psize = MMU_PAGE_1G;
-
 	/* XXX This should be decided at runtime based on supported
 	 * page sizes in the TLB, but for now let's assume 16M is
 	 * always there and a good fit (which it probably is)
mmu_linear_psize is only set at boot once on 64e, is not necessarily
the correct size of the linear map pages, and is never used anywhere
except memremap_compat_align.

Remove mmu_linear_psize and hard code the 1GB value instead in
memremap_compat_align.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/mm/ioremap.c    | 6 +++++-
 arch/powerpc/mm/nohash/tlb.c | 9 ---------
 2 files changed, 5 insertions(+), 10 deletions(-)
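For readers unfamiliar with what the hard-coded value means for callers: memremap_compat_align() reports the alignment that device memory (ZONE_DEVICE) ranges must honour, and on 64e the 1GB value covers the largest possible linear-map page. A tiny standalone check of the arithmetic after the patch; `SUBSECTION_SIZE` is mocked and the `is_compat_aligned()` helper is hypothetical, purely for illustration.

```c
#include <assert.h>

#define SUBSECTION_SIZE (1UL << 21)  /* assumption: 2MB; not necessarily the kernel's value */
#define MAX(a, b)       ((a) > (b) ? (a) : (b))

/* The 64e result after the patch: 1GB dominates SUBSECTION_SIZE, so
 * max() only matters if subsections ever grew beyond 1GB. */
static unsigned long compat_align_64e(void)
{
	return MAX(SUBSECTION_SIZE, 1UL << 30);
}

/* Hypothetical helper: a physical address satisfies the compat
 * alignment only if its low bits below the alignment are all zero. */
static int is_compat_aligned(unsigned long pa)
{
	return (pa & (compat_align_64e() - 1)) == 0;
}
```

So any 1GB-multiple physical address qualifies, while anything with bits set below 1GB does not, regardless of the (smaller) subsection size.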