Message ID | 20240410043006.81577-1-mahesh@linux.ibm.com (mailing list archive) |
---|---|
State | Accepted |
Commit | 0db880fc865ffb522141ced4bfa66c12ab1fbb70 |
Series | [v5] powerpc: Avoid nmi_enter/nmi_exit in real mode interrupt. |
Context | Check | Description |
---|---|---|
snowpatch_ozlabs/github-powerpc_ppctests | success | Successfully ran 8 jobs. |
snowpatch_ozlabs/github-powerpc_selftests | success | Successfully ran 8 jobs. |
snowpatch_ozlabs/github-powerpc_sparse | success | Successfully ran 4 jobs. |
snowpatch_ozlabs/github-powerpc_clang | success | Successfully ran 6 jobs. |
snowpatch_ozlabs/github-powerpc_kernel_qemu | success | Successfully ran 23 jobs. |
On Wed, 2024-04-10 at 10:00 +0530, Mahesh Salgaonkar wrote:
> nmi_enter()/nmi_exit() touches per cpu variables which can lead to kernel
> crash when invoked during real mode interrupt handling (e.g. early HMI/MCE
> interrupt handler) if percpu allocation comes from vmalloc area.
>
> Early HMI/MCE handlers are called through DEFINE_INTERRUPT_HANDLER_NMI()
> wrapper which invokes nmi_enter/nmi_exit calls. We don't see any issue when
> percpu allocation is from the embedded first chunk. However with
> CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK enabled there are chances where percpu
> allocation can come from the vmalloc area.
>
> With kernel command line "percpu_alloc=page" we can force percpu allocation
> to come from vmalloc area and can see kernel crash in machine_check_early:
>
> [    1.215714] NIP [c000000000e49eb4] rcu_nmi_enter+0x24/0x110
> [    1.215717] LR [c0000000000461a0] machine_check_early+0xf0/0x2c0
> [    1.215719] --- interrupt: 200
> [    1.215720] [c000000fffd73180] [0000000000000000] 0x0 (unreliable)
> [    1.215722] [c000000fffd731b0] [0000000000000000] 0x0
> [    1.215724] [c000000fffd73210] [c000000000008364] machine_check_early_common+0x134/0x1f8
>
> Fix this by avoiding use of nmi_enter()/nmi_exit() in real mode if percpu
> first chunk is not embedded.
>
> Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
> Tested-by: Shirisha Ganta <shirisha@linux.ibm.com>
> Signed-off-by: Mahesh Salgaonkar <mahesh@linux.ibm.com>

Thanks for the patch. I have tested it and the fix works fine. The
selftests/powerpc/mce/inject-ra-err testcase works as expected after
enabling percpu_alloc=page with the patch applied.

Output with the patch:

# ./inject-ra-err
test: inject-ra-err
tags: git_version:unknown
success: inject-ra-err

Tested-by: Shirisha Ganta <shirisha@linux.ibm.com>

> ---
> Changes in v5:
> - Invert the percpu chunk checks as suggested by mpe.
> - Fix microwatt build; the microwatt config has CONFIG_PPC64=y and
>   CONFIG_SMP=n.
> - v4 at https://lore.kernel.org/linuxppc-dev/20240214095146.1527369-1-mahesh@linux.ibm.com/
>
> Changes in v4:
> - Fix coding style issues.
>
> Changes in v3:
> - Address comments from Christophe Leroy to avoid using #ifdefs in the code
> - v2 at https://lore.kernel.org/linuxppc-dev/20240205053647.1763446-1-mahesh@linux.ibm.com/
>
> Changes in v2:
> - Rebase to upstream master
> - Use jump_labels, if CONFIG_JUMP_LABEL is enabled, to avoid redoing the
>   embed first chunk test at each interrupt entry.
> - v1 is at https://lore.kernel.org/linuxppc-dev/164578465828.74956.6065296024817333750.stgit@jupiter/
> ---
>  arch/powerpc/include/asm/interrupt.h | 10 ++++++++++
>  arch/powerpc/include/asm/percpu.h    | 10 ++++++++++
>  arch/powerpc/kernel/setup_64.c       |  2 ++
>  3 files changed, 22 insertions(+)
>
> [...]
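The v2 changelog above mentions jump labels. For readers unfamiliar with that mechanism, here is a minimal sketch of the static-key pattern the patch builds on. It is an illustration only: the names example_paged_key, example_boot_time_setup and example_hot_path are made up, while DEFINE_STATIC_KEY_FALSE, static_key_enable() and static_branch_unlikely() are the actual <linux/jump_label.h> API.

```c
/*
 * Illustrative sketch, not code from the patch: a static key defaults
 * to false and is flipped once at boot; afterwards, with
 * CONFIG_JUMP_LABEL, the hot-path test costs a patched branch (a nop
 * until the key is enabled) rather than a memory load.
 */
#include <linux/jump_label.h>

DEFINE_STATIC_KEY_FALSE(example_paged_key);

void example_boot_time_setup(void)
{
	/* One-time decision, analogous to setup_per_cpu_areas(). */
	static_key_enable(&example_paged_key.key);
}

void example_hot_path(void)
{
	/* Compiled as a nop until the key is enabled. */
	if (static_branch_unlikely(&example_paged_key))
		return;	/* skip the nmi_enter()-style work */

	/* ... the common, embedded-first-chunk path ... */
}
```

The patch itself reads the key through static_key_enabled() inside the percpu_first_chunk_is_paged macro, a form that also works when CONFIG_JUMP_LABEL is disabled.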
On Wed, 10 Apr 2024 10:00:06 +0530, Mahesh Salgaonkar wrote:
> nmi_enter()/nmi_exit() touches per cpu variables which can lead to kernel
> crash when invoked during real mode interrupt handling (e.g. early HMI/MCE
> interrupt handler) if percpu allocation comes from vmalloc area.
>
> Early HMI/MCE handlers are called through DEFINE_INTERRUPT_HANDLER_NMI()
> wrapper which invokes nmi_enter/nmi_exit calls. We don't see any issue when
> percpu allocation is from the embedded first chunk. However with
> CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK enabled there are chances where percpu
> allocation can come from the vmalloc area.
>
> [...]

Applied to powerpc/next.

[1/1] powerpc: Avoid nmi_enter/nmi_exit in real mode interrupt.
      https://git.kernel.org/powerpc/c/0db880fc865ffb522141ced4bfa66c12ab1fbb70

cheers
```diff
diff --git a/arch/powerpc/include/asm/interrupt.h b/arch/powerpc/include/asm/interrupt.h
index 7b610864b3645..2d6c886b40f44 100644
--- a/arch/powerpc/include/asm/interrupt.h
+++ b/arch/powerpc/include/asm/interrupt.h
@@ -336,6 +336,14 @@ static inline void interrupt_nmi_enter_prepare(struct pt_regs *regs, struct inte
 	if (IS_ENABLED(CONFIG_KASAN))
 		return;
 
+	/*
+	 * Likewise, do not use it in real mode if percpu first chunk is not
+	 * embedded. With CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK enabled there
+	 * are chances where percpu allocation can come from vmalloc area.
+	 */
+	if (percpu_first_chunk_is_paged)
+		return;
+
 	/* Otherwise, it should be safe to call it */
 	nmi_enter();
 }
@@ -351,6 +359,8 @@ static inline void interrupt_nmi_exit_prepare(struct pt_regs *regs, struct inter
 		// no nmi_exit for a pseries hash guest taking a real mode exception
 	} else if (IS_ENABLED(CONFIG_KASAN)) {
 		// no nmi_exit for KASAN in real mode
+	} else if (percpu_first_chunk_is_paged) {
+		// no nmi_exit if percpu first chunk is not embedded
 	} else {
 		nmi_exit();
 	}
diff --git a/arch/powerpc/include/asm/percpu.h b/arch/powerpc/include/asm/percpu.h
index 8e5b7d0b851c6..634970ce13c6b 100644
--- a/arch/powerpc/include/asm/percpu.h
+++ b/arch/powerpc/include/asm/percpu.h
@@ -15,6 +15,16 @@
 #endif /* CONFIG_SMP */
 #endif /* __powerpc64__ */
 
+#if defined(CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK) && defined(CONFIG_SMP)
+#include <linux/jump_label.h>
+DECLARE_STATIC_KEY_FALSE(__percpu_first_chunk_is_paged);
+
+#define percpu_first_chunk_is_paged	\
+	(static_key_enabled(&__percpu_first_chunk_is_paged.key))
+#else
+#define percpu_first_chunk_is_paged	false
+#endif /* CONFIG_PPC64 && CONFIG_SMP */
+
 #include <asm-generic/percpu.h>
 
 #include <asm/paca.h>
diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index 2f19d5e944852..ae36a129789ff 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -834,6 +834,7 @@ static __init int pcpu_cpu_to_node(int cpu)
 
 unsigned long __per_cpu_offset[NR_CPUS] __read_mostly;
 EXPORT_SYMBOL(__per_cpu_offset);
+DEFINE_STATIC_KEY_FALSE(__percpu_first_chunk_is_paged);
 
 void __init setup_per_cpu_areas(void)
 {
@@ -876,6 +877,7 @@ void __init setup_per_cpu_areas(void)
 	if (rc < 0)
 		panic("cannot initialize percpu area (err=%d)", rc);
 
+	static_key_enable(&__percpu_first_chunk_is_paged.key);
 	delta = (unsigned long)pcpu_base_addr - (unsigned long)__per_cpu_start;
 	for_each_possible_cpu(cpu) {
 		__per_cpu_offset[cpu] = delta + pcpu_unit_offsets[cpu];
```
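As an aside, one way to observe where the first chunk actually landed on a given boot is the kernel's is_vmalloc_addr() helper. The sketch below is not part of the patch; probe_var and percpu_probe are made-up names, while DEFINE_PER_CPU, per_cpu_ptr(), is_vmalloc_addr() and late_initcall() are real kernel APIs.

```c
/*
 * Hedged sketch, not from the patch: report at boot whether a static
 * per-cpu variable's storage (and hence the percpu first chunk) sits
 * in the vmalloc area, as happens with percpu_alloc=page on a
 * CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK kernel.
 */
#include <linux/init.h>
#include <linux/mm.h>
#include <linux/percpu.h>
#include <linux/printk.h>

static DEFINE_PER_CPU(int, probe_var);

static int __init percpu_probe(void)
{
	void *addr = per_cpu_ptr(&probe_var, 0);

	pr_info("percpu first chunk is %sin the vmalloc area\n",
		is_vmalloc_addr(addr) ? "" : "not ");
	return 0;
}
late_initcall(percpu_probe);
```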