| Message ID | 20231012195415.282357-2-willy@infradead.org |
|---|---|
| State | Superseded |
| Series | Allow nesting of lazy MMU mode |
```diff
diff --git a/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h b/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
index 146287d9580f..bc845d876ed2 100644
--- a/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
+++ b/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
@@ -38,7 +38,7 @@ static inline void arch_enter_lazy_mmu_mode(void)
 	 */
 	preempt_disable();
 	batch = this_cpu_ptr(&ppc64_tlb_batch);
-	batch->active = 1;
+	batch->active++;
 }
 
 static inline void arch_leave_lazy_mmu_mode(void)
@@ -49,9 +49,8 @@ static inline void arch_leave_lazy_mmu_mode(void)
 		return;
 	batch = this_cpu_ptr(&ppc64_tlb_batch);
 
-	if (batch->index)
+	if ((--batch->active == 0) && batch->index)
 		__flush_tlb_pending(batch);
-	batch->active = 0;
 	preempt_enable();
 }
```
As noted in commit 49147beb0ccb ("x86/xen: allow nesting of same lazy mode"),
we can now nest calls to arch_enter_lazy_mmu_mode().  Use ->active as a
counter instead of a flag and only drain the batch when the counter hits 0.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Fixes: bcc6cc832573 ("mm: add default definition of set_ptes()")
---
 arch/powerpc/include/asm/book3s/64/tlbflush-hash.h | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)