
[1/3] powerpc/64s: Disable preemption in hash lazy mmu mode

Message ID 20221013151647.1857994-1-npiggin@gmail.com (mailing list archive)
State Accepted
Series [1/3] powerpc/64s: Disable preemption in hash lazy mmu mode

Commit Message

Nicholas Piggin Oct. 13, 2022, 3:16 p.m. UTC
apply_to_page_range on kernel pages does not disable preemption, which
hash's lazy mmu mode requires because it keeps track of the TLBs to
flush with a per-cpu array.

Reported-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/include/asm/book3s/64/tlbflush-hash.h | 6 ++++++
 1 file changed, 6 insertions(+)
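
For context: in lazy mmu mode the hash code queues the TLB entries to
invalidate in a per-cpu batch instead of flushing them one at a time, so the
task must stay on one CPU from enter to leave. A rough, hypothetical sketch of
that per-cpu batching pattern follows (the names demo_tlb_batch and
demo_queue_flush are invented for illustration; the real structure is
ppc64_tlb_batch):

	#include <linux/percpu.h>
	#include <linux/preempt.h>

	/* Invented stand-in for ppc64_tlb_batch, illustration only. */
	struct demo_tlb_batch {
		unsigned long index;
		unsigned long vpn[192];	/* virtual pages awaiting a hash/TLB flush */
		int active;
	};
	static DEFINE_PER_CPU(struct demo_tlb_batch, demo_tlb_batch);

	static void demo_queue_flush(unsigned long vpn)
	{
		struct demo_tlb_batch *batch;

		/*
		 * If preemption were enabled here, the task could migrate after
		 * this_cpu_ptr() and queue the entry into (or later flush) a
		 * different CPU's array than the one that was marked active.
		 */
		batch = this_cpu_ptr(&demo_tlb_batch);
		batch->vpn[batch->index++] = vpn;
	}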

Comments

Christophe Leroy Oct. 13, 2022, 3:29 p.m. UTC | #1
On 13/10/2022 at 17:16, Nicholas Piggin wrote:
> apply_to_page_range on kernel pages does not disable preemption, which
> hash's lazy mmu mode requires because it keeps track of the TLBs to
> flush with a per-cpu array.
> 
> Reported-by: Guenter Roeck <linux@roeck-us.net>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
>   arch/powerpc/include/asm/book3s/64/tlbflush-hash.h | 6 ++++++
>   1 file changed, 6 insertions(+)
> 
> diff --git a/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h b/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
> index fab8332fe1ad..751921f6db46 100644
> --- a/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
> +++ b/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
> @@ -32,6 +32,11 @@ static inline void arch_enter_lazy_mmu_mode(void)
>   
>   	if (radix_enabled())
>   		return;
> +	/*
> +	 * apply_to_page_range can call us with preempt enabled when
> +	 * operating on kernel page tables.
> +	 */
> +	preempt_disable();
>   	batch = this_cpu_ptr(&ppc64_tlb_batch);
>   	batch->active = 1;
>   }
> @@ -47,6 +52,7 @@ static inline void arch_leave_lazy_mmu_mode(void)
>   	if (batch->index)
>   		__flush_tlb_pending(batch);
>   	batch->active = 0;
> +	preempt_enable();

You'll schedule() here. Is that acceptable in terms of performance?
Otherwise you have preempt_enable_no_resched().

>   }
>   
>   #define arch_flush_lazy_mmu_mode()      do {} while (0)
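
For reference, the difference behind that question: under CONFIG_PREEMPTION,
preempt_enable() is a preemption point and may call into the scheduler if a
reschedule became pending while preemption was off, while
preempt_enable_no_resched() only drops the count and defers any reschedule to
the next preemption point. A paraphrased sketch, not a verbatim copy of
include/linux/preempt.h:

	/* Roughly, with CONFIG_PREEMPTION: */
	#define preempt_enable()					\
	do {								\
		barrier();						\
		if (unlikely(preempt_count_dec_and_test()))		\
			__preempt_schedule();	/* may end up in schedule() */ \
	} while (0)

	/* Only drops the count; a pending reschedule waits for the
	 * next preemption point. */
	#define preempt_enable_no_resched()				\
	do {								\
		barrier();						\
		preempt_count_dec();					\
	} while (0)
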
Guenter Roeck Oct. 14, 2022, 12:17 a.m. UTC | #2
On Fri, Oct 14, 2022 at 01:16:45AM +1000, Nicholas Piggin wrote:
> apply_to_page_range on kernel pages does not disable preemption, which
> hash's lazy mmu mode requires because it keeps track of the TLBs to
> flush with a per-cpu array.
> 
> Reported-by: Guenter Roeck <linux@roeck-us.net>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>

Tested-by: Guenter Roeck <linux@roeck-us.net>

> ---
>  arch/powerpc/include/asm/book3s/64/tlbflush-hash.h | 6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h b/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
> index fab8332fe1ad..751921f6db46 100644
> --- a/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
> +++ b/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
> @@ -32,6 +32,11 @@ static inline void arch_enter_lazy_mmu_mode(void)
>  
>  	if (radix_enabled())
>  		return;
> +	/*
> +	 * apply_to_page_range can call us with preempt enabled when
> +	 * operating on kernel page tables.
> +	 */
> +	preempt_disable();
>  	batch = this_cpu_ptr(&ppc64_tlb_batch);
>  	batch->active = 1;
>  }
> @@ -47,6 +52,7 @@ static inline void arch_leave_lazy_mmu_mode(void)
>  	if (batch->index)
>  		__flush_tlb_pending(batch);
>  	batch->active = 0;
> +	preempt_enable();
>  }
>  
>  #define arch_flush_lazy_mmu_mode()      do {} while (0)
> -- 
> 2.37.2
>
Michael Ellerman Oct. 28, 2022, 11:49 a.m. UTC | #3
On Fri, 14 Oct 2022 01:16:45 +1000, Nicholas Piggin wrote:
> apply_to_page_range on kernel pages does not disable preemption, which
> hash's lazy mmu mode requires because it keeps track of the TLBs to
> flush with a per-cpu array.
> 
> 

Applied to powerpc/fixes.

[1/3] powerpc/64s: Disable preemption in hash lazy mmu mode
      https://git.kernel.org/powerpc/c/b9ef323ea1682f9837bf63ba10c5e3750f71a20a
[2/3] powerpc/64s: Fix hash__change_memory_range preemption warning
      https://git.kernel.org/powerpc/c/2b2095f3a6b43ec36ff890febc588df1ec32e826
[3/3] powerpc: fix reschedule bug in KUAP-unlocked user copy
      https://git.kernel.org/powerpc/c/00ff1eaac129a24516a3f6d75adfb9df1efb55dd

cheers

Patch

diff --git a/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h b/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
index fab8332fe1ad..751921f6db46 100644
--- a/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
+++ b/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
@@ -32,6 +32,11 @@  static inline void arch_enter_lazy_mmu_mode(void)
 
 	if (radix_enabled())
 		return;
+	/*
+	 * apply_to_page_range can call us with preempt enabled when
+	 * operating on kernel page tables.
+	 */
+	preempt_disable();
 	batch = this_cpu_ptr(&ppc64_tlb_batch);
 	batch->active = 1;
 }
@@ -47,6 +52,7 @@  static inline void arch_leave_lazy_mmu_mode(void)
 	if (batch->index)
 		__flush_tlb_pending(batch);
 	batch->active = 0;
+	preempt_enable();
 }
 
 #define arch_flush_lazy_mmu_mode()      do {} while (0)
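
As a usage note, these hooks bracket a run of page table updates; with the
preempt_disable()/preempt_enable() added above, the hash variant now keeps the
task on one CPU for the whole run so the per-cpu batch stays consistent. A
hypothetical caller in the spirit of apply_to_page_range() on kernel page
tables (demo_apply and demo_walk are invented names, and the real walker has
considerably more plumbing):

	#include <linux/mm.h>
	#include <linux/pgtable.h>

	static int demo_apply(pte_t *ptep, unsigned long addr, void *data)
	{
		/* Modify the kernel PTE here; on hash, the resulting
		 * invalidations are queued into the per-cpu batch. */
		return 0;
	}

	static void demo_walk(pte_t *ptep, unsigned long addr, unsigned long end,
			      void *data)
	{
		arch_enter_lazy_mmu_mode();	/* hash: preempt_disable(), batch->active = 1 */
		for (; addr < end; addr += PAGE_SIZE, ptep++)
			demo_apply(ptep, addr, data);
		arch_leave_lazy_mmu_mode();	/* hash: flush any pending batch, preempt_enable() */
	}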