| Message ID | 20231214055539.9420-10-nicholas@linux.ibm.com (mailing list archive) |
|---|---|
| State | Changes Requested |
| Series | kmsan: Enable on powerpc |
On 14/12/2023 at 06:55, Nicholas Miehlbradt wrote:
> Functions which walk the stack read parts of the stack which cannot be
> instrumented by KMSAN e.g. the backchain. Disable KMSAN sanitization of
> these functions to prevent false positives.

Do other architectures have to do it as well?

I don't see it for show_stack(), is that a specific need for powerpc?

>
> Signed-off-by: Nicholas Miehlbradt <nicholas@linux.ibm.com>
> ---
>  arch/powerpc/kernel/process.c    |  6 +++---
>  arch/powerpc/kernel/stacktrace.c | 10 ++++++----
>  arch/powerpc/perf/callchain.c    |  2 +-
>  3 files changed, 10 insertions(+), 8 deletions(-)
>
> diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
> index 392404688cec..3dc88143c3b2 100644
> --- a/arch/powerpc/kernel/process.c
> +++ b/arch/powerpc/kernel/process.c
> @@ -2276,9 +2276,9 @@ static bool empty_user_regs(struct pt_regs *regs, struct task_struct *tsk)
>
>  static int kstack_depth_to_print = CONFIG_PRINT_STACK_DEPTH;
>
> -void __no_sanitize_address show_stack(struct task_struct *tsk,
> -				      unsigned long *stack,
> -				      const char *loglvl)
> +void __no_sanitize_address __no_kmsan_checks show_stack(struct task_struct *tsk,
> +							unsigned long *stack,
> +							const char *loglvl)
>  {
>  	unsigned long sp, ip, lr, newsp;
>  	int count = 0;
> diff --git a/arch/powerpc/kernel/stacktrace.c b/arch/powerpc/kernel/stacktrace.c
> index e6a958a5da27..369b8b2a1bcd 100644
> --- a/arch/powerpc/kernel/stacktrace.c
> +++ b/arch/powerpc/kernel/stacktrace.c
> @@ -24,8 +24,9 @@
>
>  #include <asm/paca.h>
>
> -void __no_sanitize_address arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
> -					   struct task_struct *task, struct pt_regs *regs)
> +void __no_sanitize_address __no_kmsan_checks
> +	arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
> +			struct task_struct *task, struct pt_regs *regs)
>  {
>  	unsigned long sp;
>
> @@ -62,8 +63,9 @@ void __no_sanitize_address arch_stack_walk(stack_trace_consume_fn consume_entry,
>   *
>   * If the task is not 'current', the caller *must* ensure the task is inactive.
>   */
> -int __no_sanitize_address arch_stack_walk_reliable(stack_trace_consume_fn consume_entry,
> -						   void *cookie, struct task_struct *task)
> +int __no_sanitize_address __no_kmsan_checks
> +	arch_stack_walk_reliable(stack_trace_consume_fn consume_entry, void *cookie,
> +				 struct task_struct *task)
>  {
>  	unsigned long sp;
>  	unsigned long newsp;
> diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c
> index 6b4434dd0ff3..c7610b38e9b8 100644
> --- a/arch/powerpc/perf/callchain.c
> +++ b/arch/powerpc/perf/callchain.c
> @@ -40,7 +40,7 @@ static int valid_next_sp(unsigned long sp, unsigned long prev_sp)
>  	return 0;
>  }
>
> -void __no_sanitize_address
> +void __no_sanitize_address __no_kmsan_checks
>  perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs)
>  {
>  	unsigned long sp, next_sp;
Nicholas Miehlbradt <nicholas@linux.ibm.com> writes:

> Functions which walk the stack read parts of the stack which cannot be
> instrumented by KMSAN e.g. the backchain. Disable KMSAN sanitization of
> these functions to prevent false positives.
>

Is the annotation needed to avoid the uninitialized-access check when
reading parts of the stack? Can you provide a false positive example
for the commit message?

-aneesh
On 14/12/2023 8:00 pm, Christophe Leroy wrote:
>
>
> On 14/12/2023 at 06:55, Nicholas Miehlbradt wrote:
>> Functions which walk the stack read parts of the stack which cannot be
>> instrumented by KMSAN e.g. the backchain. Disable KMSAN sanitization of
>> these functions to prevent false positives.
>
> Do other architectures have to do it as well?
>
> I don't see it for show_stack(), is that a specific need for powerpc?
>
Other archs have the annotation on functions called by show_stack(). For
x86 it's on show_trace_log_lvl() and for s390 it's on __unwind_start()
and unwind_next_frame().
```diff
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index 392404688cec..3dc88143c3b2 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -2276,9 +2276,9 @@ static bool empty_user_regs(struct pt_regs *regs, struct task_struct *tsk)
 
 static int kstack_depth_to_print = CONFIG_PRINT_STACK_DEPTH;
 
-void __no_sanitize_address show_stack(struct task_struct *tsk,
-				      unsigned long *stack,
-				      const char *loglvl)
+void __no_sanitize_address __no_kmsan_checks show_stack(struct task_struct *tsk,
+							unsigned long *stack,
+							const char *loglvl)
 {
 	unsigned long sp, ip, lr, newsp;
 	int count = 0;
diff --git a/arch/powerpc/kernel/stacktrace.c b/arch/powerpc/kernel/stacktrace.c
index e6a958a5da27..369b8b2a1bcd 100644
--- a/arch/powerpc/kernel/stacktrace.c
+++ b/arch/powerpc/kernel/stacktrace.c
@@ -24,8 +24,9 @@
 
 #include <asm/paca.h>
 
-void __no_sanitize_address arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
-					   struct task_struct *task, struct pt_regs *regs)
+void __no_sanitize_address __no_kmsan_checks
+	arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
+			struct task_struct *task, struct pt_regs *regs)
 {
 	unsigned long sp;
 
@@ -62,8 +63,9 @@ void __no_sanitize_address arch_stack_walk(stack_trace_consume_fn consume_entry,
  *
  * If the task is not 'current', the caller *must* ensure the task is inactive.
  */
-int __no_sanitize_address arch_stack_walk_reliable(stack_trace_consume_fn consume_entry,
-						   void *cookie, struct task_struct *task)
+int __no_sanitize_address __no_kmsan_checks
+	arch_stack_walk_reliable(stack_trace_consume_fn consume_entry, void *cookie,
+				 struct task_struct *task)
 {
 	unsigned long sp;
 	unsigned long newsp;
diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c
index 6b4434dd0ff3..c7610b38e9b8 100644
--- a/arch/powerpc/perf/callchain.c
+++ b/arch/powerpc/perf/callchain.c
@@ -40,7 +40,7 @@ static int valid_next_sp(unsigned long sp, unsigned long prev_sp)
 	return 0;
 }
 
-void __no_sanitize_address
+void __no_sanitize_address __no_kmsan_checks
 perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs)
 {
 	unsigned long sp, next_sp;
```
Functions which walk the stack read parts of the stack which cannot be
instrumented by KMSAN, e.g. the backchain. Disable KMSAN sanitization of
these functions to prevent false positives.

Signed-off-by: Nicholas Miehlbradt <nicholas@linux.ibm.com>
---
 arch/powerpc/kernel/process.c    |  6 +++---
 arch/powerpc/kernel/stacktrace.c | 10 ++++++----
 arch/powerpc/perf/callchain.c    |  2 -
 3 files changed, 10 insertions(+), 8 deletions(-)