| Message ID | 1487063803-10848-2-git-send-email-ravi.bangoria@linux.vnet.ibm.com (mailing list archive) |
|---|---|
| State | Accepted |
Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com> writes:

> emulate_step() uses a number of underlying kernel functions that were
> initially not enabled for LE. This has been rectified since.

When exactly? i.e. which commit?

Should we backport this? i.e. is it actually a bug people are hitting
much in the real world?

cheers
Thanks Michael,

On Tuesday 14 February 2017 03:50 PM, Michael Ellerman wrote:
> Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com> writes:
>
>> emulate_step() uses a number of underlying kernel functions that were
>> initially not enabled for LE. This has been rectified since.
>
> When exactly? ie. which commit.

I found a couple of commits:

  6506b4718b ("powerpc: Fix Unaligned Loads and Stores")
  dbc2fbd7c2 ("powerpc: Fix Unaligned LE Floating Point Loads and Stores")

There may be more. Patch 2 tests emulate_step() for basic load/store
instructions, and it seems to be working fine on LE.

> Should we backport this? ie. is it actually a bug people are hitting in
> the real world much?

Yes, we should backport this. The kernel-space hw-breakpoint feature is
broken on LE without it. This is on ppc64le:

  $ sudo cat /proc/kallsyms | grep pid_max
  c00000000116998c D pid_max

  $ sudo ./perf record -a --event=mem:0xc00000000116998c sleep 10

Before the patch: it does not record any data and throws the warning below.

  $ dmesg
  [  817.895573] Unable to handle hardware breakpoint. Breakpoint at 0xc00000000116998c will be disabled.
  [  817.895581] ------------[ cut here ]------------
  [  817.895588] WARNING: CPU: 24 PID: 2032 at arch/powerpc/kernel/hw_breakpoint.c:277 hw_breakpoint_handler+0x124/0x230
  ...

After the patch: it records data properly.

  $ sudo ./perf report --stdio
  ...
  # Samples: 36  of event 'mem:0xc00000000116998c'
  # Event count (approx.): 36
  #
  # Overhead  Command        Shared Object     Symbol
  # ........  .............  ................  .............
  #
      63.89%  kdumpctl       [kernel.vmlinux]  [k] alloc_pid
      27.78%  opal_errd      [kernel.vmlinux]  [k] alloc_pid
       5.56%  kworker/u97:4  [kernel.vmlinux]  [k] alloc_pid
       2.78%  systemd       [kernel.vmlinux]  [k] alloc_pid
On Tue, 2017-02-14 at 09:16:42 UTC, Ravi Bangoria wrote:
> emulate_step() uses a number of underlying kernel functions that were
> initially not enabled for LE. This has been rectified since. So, fix
> emulate_step() for LE for the corresponding instructions.
>
> Reported-by: Anton Blanchard <anton@samba.org>
> Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>

Series applied to powerpc fixes, thanks.

https://git.kernel.org/powerpc/c/e148bd17f48bd17fca2f4f089ec879

cheers
diff --git a/arch/powerpc/lib/sstep.c b/arch/powerpc/lib/sstep.c
index 846dba2..9c542ec 100644
--- a/arch/powerpc/lib/sstep.c
+++ b/arch/powerpc/lib/sstep.c
@@ -1799,8 +1799,6 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned int instr)
 		goto instr_done;
 
 	case LARX:
-		if (regs->msr & MSR_LE)
-			return 0;
 		if (op.ea & (size - 1))
 			break;		/* can't handle misaligned */
 		if (!address_ok(regs, op.ea, size))
@@ -1823,8 +1821,6 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned int instr)
 		goto ldst_done;
 
 	case STCX:
-		if (regs->msr & MSR_LE)
-			return 0;
 		if (op.ea & (size - 1))
 			break;		/* can't handle misaligned */
 		if (!address_ok(regs, op.ea, size))
@@ -1849,8 +1845,6 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned int instr)
 		goto ldst_done;
 
 	case LOAD:
-		if (regs->msr & MSR_LE)
-			return 0;
 		err = read_mem(&regs->gpr[op.reg], op.ea, size, regs);
 		if (!err) {
 			if (op.type & SIGNEXT)
@@ -1862,8 +1856,6 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned int instr)
 
 #ifdef CONFIG_PPC_FPU
 	case LOAD_FP:
-		if (regs->msr & MSR_LE)
-			return 0;
 		if (size == 4)
 			err = do_fp_load(op.reg, do_lfs, op.ea, size, regs);
 		else
@@ -1872,15 +1864,11 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned int instr)
 #endif
 #ifdef CONFIG_ALTIVEC
 	case LOAD_VMX:
-		if (regs->msr & MSR_LE)
-			return 0;
 		err = do_vec_load(op.reg, do_lvx, op.ea & ~0xfUL, regs);
 		goto ldst_done;
 #endif
 #ifdef CONFIG_VSX
 	case LOAD_VSX:
-		if (regs->msr & MSR_LE)
-			return 0;
 		err = do_vsx_load(op.reg, do_lxvd2x, op.ea, regs);
 		goto ldst_done;
 #endif
@@ -1903,8 +1891,6 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned int instr)
 		goto instr_done;
 
 	case STORE:
-		if (regs->msr & MSR_LE)
-			return 0;
 		if ((op.type & UPDATE) && size == sizeof(long) &&
 		    op.reg == 1 && op.update_reg == 1 &&
 		    !(regs->msr & MSR_PR) &&
@@ -1917,8 +1903,6 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned int instr)
 
 #ifdef CONFIG_PPC_FPU
 	case STORE_FP:
-		if (regs->msr & MSR_LE)
-			return 0;
 		if (size == 4)
 			err = do_fp_store(op.reg, do_stfs, op.ea, size, regs);
 		else
@@ -1927,15 +1911,11 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned int instr)
 #endif
 #ifdef CONFIG_ALTIVEC
 	case STORE_VMX:
-		if (regs->msr & MSR_LE)
-			return 0;
 		err = do_vec_store(op.reg, do_stvx, op.ea & ~0xfUL, regs);
 		goto ldst_done;
 #endif
 #ifdef CONFIG_VSX
 	case STORE_VSX:
-		if (regs->msr & MSR_LE)
-			return 0;
 		err = do_vsx_store(op.reg, do_stxvd2x, op.ea, regs);
 		goto ldst_done;
 #endif
emulate_step() uses a number of underlying kernel functions that were
initially not enabled for LE. This has been rectified since. So, fix
emulate_step() for LE for the corresponding instructions.

Reported-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
---
 arch/powerpc/lib/sstep.c | 20 --------------------
 1 file changed, 20 deletions(-)