| Message ID | 20180504123834.GA16581@lst.de (mailing list archive) |
|---|---|
| State | Accepted |
| Commit | df78d3f6148092d33a9a24c7a9cfac3d0220b484 |
| Series | [v3] On ppc64le we HAVE_RELIABLE_STACKTRACE |
On Fri, May 04, 2018 at 02:38:34PM +0200, Torsten Duwe wrote:
> The "Power Architecture 64-Bit ELF V2 ABI" says in section 2.3.2.3:
>
> [...] There are several rules that must be adhered to in order to ensure
> reliable and consistent call chain backtracing:
>
> * Before a function calls any other function, it shall establish its
>   own stack frame, whose size shall be a multiple of 16 bytes.
>
>   – In instances where a function’s prologue creates a stack frame, the
>     back-chain word of the stack frame shall be updated atomically with
>     the value of the stack pointer (r1) when a back chain is implemented.
>     (This must be supported as default by all ELF V2 ABI-compliant
>     environments.)
> [...]
>   – The function shall save the link register that contains its return
>     address in the LR save doubleword of its caller’s stack frame before
>     calling another function.
>
> To me this sounds like the equivalent of HAVE_RELIABLE_STACKTRACE.
> This patch may be unnecessarily limited to ppc64le, but OTOH the only
> user of this flag so far is livepatching, which is only implemented on
> PPCs with 64-LE, a.k.a. ELF ABI v2.
>
> Feel free to add other ppc variants, but so far only ppc64le got tested.
>
> This change also implements save_stack_trace_tsk_reliable() for ppc64le
> that checks for the above conditions, where possible.
>
> Signed-off-by: Torsten Duwe <duwe@suse.de>
> Signed-off-by: Nicolai Stange <nstange@suse.de>

The subject doesn't actively describe what the patch does, maybe change
it to something like:

  powerpc: Add support for HAVE_RELIABLE_STACKTRACE

or maybe

  powerpc: Add support for livepatch consistency model

Otherwise it looks great to me.

Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
On Mon, 7 May 2018 10:42:08 -0500
Josh Poimboeuf <jpoimboe@redhat.com> wrote:

> The subject doesn't actively describe what the patch does, maybe
> change it to something like:
>
>   powerpc: Add support for HAVE_RELIABLE_STACKTRACE
>
> or maybe
>
>   powerpc: Add support for livepatch consistency model

Maybe $SUBJECT? You're absolutely right, the old subject was just a
leftover of my original attempt to just set the flag, before Miroslav
corrected me. I just kept on copying it without a second thought. Thanks
for noting.

> Otherwise it looks great to me.
>
> Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>

Thanks.

	Torsten
On Tue, May 08, 2018 at 10:38:32AM +0200, Torsten Duwe wrote:
> On Mon, 7 May 2018 10:42:08 -0500
> Josh Poimboeuf <jpoimboe@redhat.com> wrote:
>
> > The subject doesn't actively describe what the patch does, maybe
> > change it to something like:
> >
> >   powerpc: Add support for HAVE_RELIABLE_STACKTRACE
> >
> > or maybe
> >
> >   powerpc: Add support for livepatch consistency model
>
> Maybe $SUBJECT? You're absolutely right, the old subject was just a
> leftover of my original attempt to just set the flag, before Miroslav
> corrected me. I just kept on copying it without a second thought. Thanks
> for noting.

Generally we refer to it as "the consistency model".  So I would propose
a minor edit:

  ppc64le/livepatch: Implement reliable stack tracing for the consistency model

> > Otherwise it looks great to me.
> >
> > Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
>
> Thanks.
>
> 	Torsten
Josh Poimboeuf <jpoimboe@redhat.com> writes:
> On Tue, May 08, 2018 at 10:38:32AM +0200, Torsten Duwe wrote:
>> On Mon, 7 May 2018 10:42:08 -0500
>> Josh Poimboeuf <jpoimboe@redhat.com> wrote:
>>
>> > The subject doesn't actively describe what the patch does, maybe
>> > change it to something like:
>> >
>> >   powerpc: Add support for HAVE_RELIABLE_STACKTRACE
>> >
>> > or maybe
>> >
>> >   powerpc: Add support for livepatch consistency model
>>
>> Maybe $SUBJECT? You're absolutely right, the old subject was just a
>> leftover of my original attempt to just set the flag, before Miroslav
>> corrected me. I just kept on copying it without a second thought. Thanks
>> for noting.
>
> Generally we refer to it as "the consistency model".  So I would
> propose a minor edit:
>
>   ppc64le/livepatch: Implement reliable stack tracing for the consistency model

We use "powerpc" as the prefix.

So I've used:

  powerpc/livepatch: Implement reliable stack tracing for the consistency model

cheers
On Wed, May 09, 2018 at 11:41:09AM +1000, Michael Ellerman wrote:
> Josh Poimboeuf <jpoimboe@redhat.com> writes:
>
> > Generally we refer to it as "the consistency model". [...]
>
> We use "powerpc" as the prefix.
>
> So I've used:
>
>   powerpc/livepatch: Implement reliable stack tracing for the consistency model

Perfect.  Thanks!

	Torsten
On Fri, 2018-05-04 at 12:38:34 UTC, Torsten Duwe wrote:
> The "Power Architecture 64-Bit ELF V2 ABI" says in section 2.3.2.3:
>
> [...] There are several rules that must be adhered to in order to ensure
> reliable and consistent call chain backtracing:
>
> * Before a function calls any other function, it shall establish its
>   own stack frame, whose size shall be a multiple of 16 bytes.
>
>   – In instances where a function’s prologue creates a stack frame, the
>     back-chain word of the stack frame shall be updated atomically with
>     the value of the stack pointer (r1) when a back chain is implemented.
>     (This must be supported as default by all ELF V2 ABI-compliant
>     environments.)
> [...]
>   – The function shall save the link register that contains its return
>     address in the LR save doubleword of its caller’s stack frame before
>     calling another function.
>
> To me this sounds like the equivalent of HAVE_RELIABLE_STACKTRACE.
> This patch may be unnecessarily limited to ppc64le, but OTOH the only
> user of this flag so far is livepatching, which is only implemented on
> PPCs with 64-LE, a.k.a. ELF ABI v2.
>
> Feel free to add other ppc variants, but so far only ppc64le got tested.
>
> This change also implements save_stack_trace_tsk_reliable() for ppc64le
> that checks for the above conditions, where possible.
>
> Signed-off-by: Torsten Duwe <duwe@suse.de>
> Signed-off-by: Nicolai Stange <nstange@suse.de>
> Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>

Applied to powerpc next, thanks.

https://git.kernel.org/powerpc/c/df78d3f6148092d33a9a24c7a9cfac

cheers
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index c32a181a7cbb..54f1daf4f9e5 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -220,6 +220,7 @@ config PPC
 	select HAVE_PERF_USER_STACK_DUMP
 	select HAVE_RCU_TABLE_FREE		if SMP
 	select HAVE_REGS_AND_STACK_ACCESS_API
+	select HAVE_RELIABLE_STACKTRACE		if PPC64 && CPU_LITTLE_ENDIAN
 	select HAVE_SYSCALL_TRACEPOINTS
 	select HAVE_VIRT_CPU_ACCOUNTING
 	select HAVE_IRQ_TIME_ACCOUNTING
diff --git a/arch/powerpc/kernel/stacktrace.c b/arch/powerpc/kernel/stacktrace.c
index d534ed901538..3d62ecb2587b 100644
--- a/arch/powerpc/kernel/stacktrace.c
+++ b/arch/powerpc/kernel/stacktrace.c
@@ -2,7 +2,7 @@
  * Stack trace utility
  *
  * Copyright 2008 Christoph Hellwig, IBM Corp.
- *
+ * Copyright 2018 SUSE Linux GmbH
  *
  * This program is free software; you can redistribute it and/or
  * modify it under the terms of the GNU General Public License
@@ -11,11 +11,16 @@
  */

 #include <linux/export.h>
+#include <linux/kallsyms.h>
+#include <linux/module.h>
 #include <linux/sched.h>
 #include <linux/sched/debug.h>
+#include <linux/sched/task_stack.h>
 #include <linux/stacktrace.h>
 #include <asm/ptrace.h>
 #include <asm/processor.h>
+#include <linux/ftrace.h>
+#include <asm/kprobes.h>

 /*
  * Save stack-backtrace addresses into a stack_trace buffer.
@@ -76,3 +81,114 @@ save_stack_trace_regs(struct pt_regs *regs, struct stack_trace *trace)
 	save_context_stack(trace, regs->gpr[1], current, 0);
 }
 EXPORT_SYMBOL_GPL(save_stack_trace_regs);
+
+#ifdef CONFIG_HAVE_RELIABLE_STACKTRACE
+int
+save_stack_trace_tsk_reliable(struct task_struct *tsk,
+				struct stack_trace *trace)
+{
+	unsigned long sp;
+	unsigned long stack_page = (unsigned long)task_stack_page(tsk);
+	unsigned long stack_end;
+	int graph_idx = 0;
+
+	/* The last frame (unwinding first) may not yet have saved
+	 * its LR onto the stack.
+	 */
+	int firstframe = 1;
+
+	if (tsk == current)
+		sp = current_stack_pointer();
+	else
+		sp = tsk->thread.ksp;
+
+	stack_end = stack_page + THREAD_SIZE;
+	if (!is_idle_task(tsk)) {
+		/*
+		 * For user tasks, this is the SP value loaded on
+		 * kernel entry, see "PACAKSAVE(r13)" in _switch() and
+		 * system_call_common()/EXCEPTION_PROLOG_COMMON().
+		 *
+		 * Likewise for non-swapper kernel threads,
+		 * this also happens to be the top of the stack
+		 * as setup by copy_thread().
+		 *
+		 * Note that stack backlinks are not properly setup by
+		 * copy_thread() and thus, a forked task() will have
+		 * an unreliable stack trace until it's been
+		 * _switch()'ed to for the first time.
+		 */
+		stack_end -= STACK_FRAME_OVERHEAD + sizeof(struct pt_regs);
+	} else {
+		/*
+		 * idle tasks have a custom stack layout,
+		 * c.f. cpu_idle_thread_init().
+		 */
+		stack_end -= STACK_FRAME_OVERHEAD;
+	}
+
+	if (sp < stack_page + sizeof(struct thread_struct) ||
+	    sp > stack_end - STACK_FRAME_MIN_SIZE) {
+		return 1;
+	}
+
+	for (;;) {
+		unsigned long *stack = (unsigned long *) sp;
+		unsigned long newsp, ip;
+
+		/* sanity check: ABI requires SP to be aligned 16 bytes. */
+		if (sp & 0xF)
+			return 1;
+
+		/* Mark stacktraces with exception frames as unreliable. */
+		if (sp <= stack_end - STACK_INT_FRAME_SIZE &&
+		    stack[STACK_FRAME_MARKER] == STACK_FRAME_REGS_MARKER) {
+			return 1;
+		}
+
+		newsp = stack[0];
+		/* Stack grows downwards; unwinder may only go up. */
+		if (newsp <= sp)
+			return 1;
+
+		if (newsp != stack_end &&
+		    newsp > stack_end - STACK_FRAME_MIN_SIZE) {
+			return 1; /* invalid backlink, too far up. */
+		}
+
+		/* Examine the saved LR: it must point into kernel code. */
+		ip = stack[STACK_FRAME_LR_SAVE];
+		if (!firstframe && !__kernel_text_address(ip))
+			return 1;
+		firstframe = 0;
+
+		/*
+		 * FIXME: IMHO these tests do not belong in
+		 * arch-dependent code, they are generic.
+		 */
+		ip = ftrace_graph_ret_addr(tsk, &graph_idx, ip, NULL);
+
+		/*
+		 * Mark stacktraces with kretprobed functions on them
+		 * as unreliable.
+		 */
+		if (ip == (unsigned long)kretprobe_trampoline)
+			return 1;
+
+		if (!trace->skip)
+			trace->entries[trace->nr_entries++] = ip;
+		else
+			trace->skip--;
+
+		if (newsp == stack_end)
+			break;
+
+		if (trace->nr_entries >= trace->max_entries)
+			return -E2BIG;
+
+		sp = newsp;
+	}
+	return 0;
+}
+EXPORT_SYMBOL_GPL(save_stack_trace_tsk_reliable);
+#endif /* CONFIG_HAVE_RELIABLE_STACKTRACE */