From patchwork Tue May 31 10:56:30 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Anju T Sudhakar
X-Patchwork-Id: 628134
Message-Id: <201605311058.u4VAsiY9009320@mx0a-001b2d01.pphosted.com>
From: Anju T <anju@linux.vnet.ibm.com>
To: linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org
Subject: [RFC PATCH v3 2/3] arch/powerpc: optprobes for powerpc core
Date: Tue, 31 May 2016 16:26:30 +0530
X-Mailer: git-send-email 2.1.0
In-Reply-To: <1464692191-1167-1-git-send-email-anju@linux.vnet.ibm.com>
References: <1464692191-1167-1-git-send-email-anju@linux.vnet.ibm.com>
Cc: ananth@in.ibm.com, anjutsudhakar@gmail.com, mahesh@linux.vnet.ibm.com,
    anju@linux.vnet.ibm.com, paulus@samba.org, mhiramat@kernel.org,
    naveen.n.rao@linux.vnet.ibm.com, hemant@linux.vnet.ibm.com,
    srikar@linux.vnet.ibm.com

Instructions which can be emulated are candidates for optimization.
Before optimizing, ensure that the detour buffer allocated for the
probe lies within +/- 32MB of the instruction being probed.

Signed-off-by: Anju T <anju@linux.vnet.ibm.com>
---
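(Not part of the patch: a minimal userspace sketch of the +/- 32MB
reachability test the patch performs; branch_in_range() is a hypothetical
helper. A powerpc relative branch encodes a signed, word-aligned offset in
a 24-bit LI field shifted left by 2, so a target is reachable only if the
offset is within -0x2000000 .. 0x1fffffc bytes.)

	#include <stdio.h>

	/* Return 1 if 'to' is reachable from 'from' with a relative branch. */
	static int branch_in_range(unsigned long from, unsigned long to)
	{
		long offset = (long)to - (long)from;

		/* must be word-aligned and fit the signed 26-bit displacement */
		return !(offset & 0x3) &&
		       offset >= -0x2000000 && offset <= 0x1fffffc;
	}

	int main(void)
	{
		/* 1: exactly at the forward limit */
		printf("%d\n", branch_in_range(0x1000, 0x1000 + 0x1fffffc));
		/* 0: one word past the limit */
		printf("%d\n", branch_in_range(0x1000, 0x1000 + 0x2000000));
		return 0;
	}
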
 arch/powerpc/kernel/optprobes.c | 351 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 351 insertions(+)
 create mode 100644 arch/powerpc/kernel/optprobes.c

diff --git a/arch/powerpc/kernel/optprobes.c b/arch/powerpc/kernel/optprobes.c
new file mode 100644
index 0000000..c4253b6
--- /dev/null
+++ b/arch/powerpc/kernel/optprobes.c
@@ -0,0 +1,351 @@
+/*
+ * Code for Kernel probes Jump optimization.
+ *
+ * Copyright 2016, Anju T, IBM Corp.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/kprobes.h>
+#include <linux/jump_label.h>
+#include <linux/types.h>
+#include <linux/slab.h>
+#include <linux/list.h>
+#include <asm/kprobes.h>
+#include <asm/ptrace.h>
+#include <asm/cacheflush.h>
+#include <asm/code-patching.h>
+#include <asm/sstep.h>
+
+DEFINE_INSN_CACHE_OPS(ppc_optinsn)
+
+#define TMPL_CALL_HDLR_IDX	\
+	(optprobe_template_call_handler - optprobe_template_entry)
+#define TMPL_EMULATE_IDX	\
+	(optprobe_template_call_emulate - optprobe_template_entry)
+#define TMPL_RET_BRANCH_IDX	\
+	(optprobe_template_ret_branch - optprobe_template_entry)
+#define TMPL_RET_IDX	\
+	(optprobe_template_ret - optprobe_template_entry)
+#define TMPL_KP_IDX	\
+	(optprobe_template_kp_addr - optprobe_template_entry)
+#define TMPL_OP1_IDX	\
+	(optprobe_template_op_address1 - optprobe_template_entry)
+#define TMPL_OP2_IDX	\
+	(optprobe_template_op_address2 - optprobe_template_entry)
+#define TMPL_INSN_IDX	\
+	(optprobe_template_insn - optprobe_template_entry)
+#define TMPL_END_IDX	\
+	(optprobe_template_end - optprobe_template_entry)
+
+static unsigned long val_nip;
+
+static void *__ppc_alloc_insn_page(void)
+{
+	return &optinsn_slot;
+}
+
+static void __ppc_free_insn_page(void *page __maybe_unused)
+{
+	/* Nothing to free: slots live in the reserved area */
+}
+
+struct kprobe_insn_cache kprobe_ppc_optinsn_slots = {
+	.mutex = __MUTEX_INITIALIZER(kprobe_ppc_optinsn_slots.mutex),
+	.pages = LIST_HEAD_INIT(kprobe_ppc_optinsn_slots.pages),
+	/* insn_size initialized later */
+	.alloc = __ppc_alloc_insn_page,
+	.free = __ppc_free_insn_page,
+	.nr_garbage = 0,
+};
+
+kprobe_opcode_t *ppc_get_optinsn_slot(struct optimized_kprobe *op)
+{
+	/*
+	 * The insn slot is allocated from the reserved
+	 * area (i.e. &optinsn_slot). We are not optimizing
+	 * probes at module addresses for now.
+	 */
+	kprobe_opcode_t *slot = NULL;
+
+	if (is_kernel_addr((unsigned long)op->kp.addr))
+		slot = get_ppc_optinsn_slot();
+	return slot;
+}
+
+static void ppc_free_optinsn_slot(struct optimized_kprobe *op)
+{
+	if (!op->optinsn.insn)
+		return;
+	if (is_kernel_addr((unsigned long)op->kp.addr))
+		free_ppc_optinsn_slot(op->optinsn.insn, 0);
+}
+
+static void
+__arch_remove_optimized_kprobe(struct optimized_kprobe *op, int dirty)
+{
+	ppc_free_optinsn_slot(op);
+	op->optinsn.insn = NULL;
+}
+
+static int can_optimize(struct kprobe *p)
+{
+	struct pt_regs *regs;
+	unsigned int instr;
+	int r;
+
+	/*
+	 * Do not optimize the kprobe placed by
+	 * kretprobe during boot time
+	 */
+	if ((unsigned long)p->addr == (unsigned long)&kretprobe_trampoline)
+		return 0;
+
+	regs = kmalloc(sizeof(*regs), GFP_KERNEL);
+	if (!regs)
+		return 0;
+	memcpy(regs, current_pt_regs(), sizeof(struct pt_regs));
+	regs->nip = (unsigned long)p->addr;
+	instr = *(p->ainsn.insn);
+
+	/* Ensure the instruction can be emulated */
+	r = emulate_step(regs, instr);
+	val_nip = regs->nip;
+	kfree(regs);
+	if (r != 1)
+		return 0;
+
+	return 1;
+}
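+
+/*
+ * Detour buffer flow (summarized from the template indices above and
+ * arch_prepare_optimized_kprobe() below): the probed instruction is
+ * replaced with a branch into the buffer, which loads &op into r3 and
+ * calls optimized_callback() to run the pre-handler, loads the original
+ * instruction into r4 and calls emulate_step() to execute it out of
+ * line, and finally branches back to the address following the probe;
+ * that last branch is patched in at run time by create_return_branch().
+ */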
+
+static void
+create_return_branch(struct optimized_kprobe *op, struct pt_regs *regs)
+{
+	/*
+	 * Create a branch back to the return address
+	 * after the probed instruction is emulated
+	 */
+	kprobe_opcode_t branch, *buff;
+	unsigned long ret;
+
+	ret = regs->nip;
+	buff = op->optinsn.insn;
+	/*
+	 * TODO: For conditional branch instructions, the return
+	 * address may differ on SMP systems. This has to be addressed.
+	 */
+	branch = create_branch((unsigned int *)buff + TMPL_RET_IDX,
+			       (unsigned long)ret, 0);
+	buff[TMPL_RET_IDX] = branch;
+	isync();
+}
+
+static void
+optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs)
+{
+	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+	unsigned long flags;
+
+	local_irq_save(flags);
+
+	if (kprobe_running()) {
+		kprobes_inc_nmissed_count(&op->kp);
+	} else {
+		__this_cpu_write(current_kprobe, &op->kp);
+		kcb->kprobe_status = KPROBE_HIT_ACTIVE;
+		opt_pre_handler(&op->kp, regs);
+		__this_cpu_write(current_kprobe, NULL);
+	}
+	local_irq_restore(flags);
+}
+NOKPROBE_SYMBOL(optimized_callback);
+
+void arch_remove_optimized_kprobe(struct optimized_kprobe *op)
+{
+	__arch_remove_optimized_kprobe(op, 1);
+}
+
+void create_insn(unsigned int insn, kprobe_opcode_t *addr)
+{
+	u32 instr, instr2;
+
+	/*
+	 * emulate_step() takes the instruction to be emulated as
+	 * its second parameter, so r4 must be loaded with 'insn'.
+	 *
+	 * synthesize: addis r4,0,(insn)@h
+	 */
+	instr = 0x3c000000 | 0x800000 | ((insn >> 16) & 0xffff);
+	*addr++ = instr;
+
+	/* ori r4,r4,(insn)@l */
+	instr2 = 0x60000000 | 0x40000 | 0x800000;
+	instr2 = instr2 | (insn & 0xffff);
+	*addr = instr2;
+}
+
+void create_load_address_insn(unsigned long val, kprobe_opcode_t *addr)
+{
+	u32 instr1, instr2, instr3, instr4, instr5;
+
+	/*
+	 * The optimized_kprobe structure is required as a parameter
+	 * for invoking optimized_callback() and create_return_branch()
+	 * from the detour buffer, so we need a 64-bit immediate
+	 * load into r3.
+	 *
+	 * lis r3,(op)@highest
+	 */
+	instr1 = 0x3c000000 | 0x600000 | ((val >> 48) & 0xffff);
+	*addr++ = instr1;
+
+	/* ori r3,r3,(op)@higher */
+	instr2 = 0x60000000 | 0x30000 | 0x600000 | ((val >> 32) & 0xffff);
+	*addr++ = instr2;
+
+	/* rldicr r3,r3,32,31 */
+	instr3 = 0x78000004 | 0x30000 | 0x600000 | ((32 & 0x1f) << 11);
+	instr3 = instr3 | ((31 & 0x1f) << 6) | ((32 & 0x20) >> 4);
+	*addr++ = instr3;
+
+	/* oris r3,r3,(op)@h */
+	instr4 = 0x64000000 | 0x30000 | 0x600000 | ((val >> 16) & 0xffff);
+	*addr++ = instr4;
+
+	/* ori r3,r3,(op)@l */
+	instr5 = 0x60000000 | 0x30000 | 0x600000 | (val & 0xffff);
+	*addr = instr5;
+}
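+
+/*
+ * Worked example for create_load_address_insn() with
+ * val = 0xc000000012345678:
+ *
+ *	lis    r3,0xc000	; r3 = 0xffffffffc0000000 (sign-extended)
+ *	ori    r3,r3,0x0000	; bits 32-47 of val (a no-op here)
+ *	rldicr r3,r3,32,31	; r3 = 0xc000000000000000
+ *	oris   r3,r3,0x1234	; r3 = 0xc000000012340000
+ *	ori    r3,r3,0x5678	; r3 = 0xc000000012345678
+ */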
+
+int arch_prepare_optimized_kprobe(struct optimized_kprobe *op, struct kprobe *p)
+{
+	kprobe_opcode_t *buff, branch, branch2, branch3;
+	long rel_chk, ret_chk;
+
+	kprobe_ppc_optinsn_slots.insn_size = MAX_OPTINSN_SIZE;
+	op->optinsn.insn = NULL;
+
+	if (!can_optimize(p))
+		return -EILSEQ;
+
+	/* Allocate an instruction slot for the detour buffer */
+	buff = ppc_get_optinsn_slot(op);
+	if (!buff)
+		return -ENOMEM;
+
+	/*
+	 * OPTPROBE uses a 'b' instruction to branch to optinsn.insn.
+	 *
+	 * The target address has to be relatively close, since it is
+	 * specified in an immediate field of the branch opcode itself,
+	 * i.e. 24 bits. The detour buffer must therefore be within
+	 * 32MB on either side of the probed instruction.
+	 */
+	rel_chk = (long)buff - (unsigned long)p->addr;
+	if (rel_chk < -0x2000000 || rel_chk > 0x1fffffc || rel_chk & 0x3) {
+		ppc_free_optinsn_slot(op);
+		return -ERANGE;
+	}
+	/* Check that the return address is also within the 32MB range */
+	ret_chk = (long)(buff + TMPL_RET_IDX) - (unsigned long)val_nip;
+	if (ret_chk < -0x2000000 || ret_chk > 0x1fffffc || ret_chk & 0x3) {
+		ppc_free_optinsn_slot(op);
+		return -ERANGE;
+	}
+
+	/* Copy the arch-specific template into the detour buffer */
+	memcpy(buff, optprobe_template_entry,
+	       TMPL_END_IDX * sizeof(kprobe_opcode_t));
+	create_load_address_insn((unsigned long)p->addr, buff + TMPL_KP_IDX);
+	create_load_address_insn((unsigned long)op, buff + TMPL_OP1_IDX);
+	create_load_address_insn((unsigned long)op, buff + TMPL_OP2_IDX);
+
+	/* Create a branch to the optimized_callback() function */
+	branch = create_branch((unsigned int *)buff + TMPL_CALL_HDLR_IDX,
+			       (unsigned long)optimized_callback + 8,
+			       BRANCH_SET_LINK);
+
+	/* Place the branch instruction into the trampoline */
+	buff[TMPL_CALL_HDLR_IDX] = branch;
+	create_insn(*(p->ainsn.insn), buff + TMPL_INSN_IDX);
+
+	/* Create a branch instruction to emulate_step() */
+	branch3 = create_branch((unsigned int *)buff + TMPL_EMULATE_IDX,
+				(unsigned long)emulate_step + 8,
+				BRANCH_SET_LINK);
+	buff[TMPL_EMULATE_IDX] = branch3;
+
+	/* Create a branch for jumping back */
+	branch2 = create_branch((unsigned int *)buff + TMPL_RET_BRANCH_IDX,
+				(unsigned long)create_return_branch + 8,
+				BRANCH_SET_LINK);
+	buff[TMPL_RET_BRANCH_IDX] = branch2;
+
+	op->optinsn.insn = buff;
+	smp_mb();
+	return 0;
+}
+
+int arch_prepared_optinsn(struct arch_optimized_insn *optinsn)
+{
+	return optinsn->insn != NULL;
+}
+
+/*
+ * Here, a kprobe opt always replaces one instruction (4 bytes
+ * aligned and 4 bytes long). It is impossible to encounter another
+ * kprobe in the address range, so always return 0.
+ */
+int arch_check_optimized_kprobe(struct optimized_kprobe *op)
+{
+	return 0;
+}
+
+void arch_optimize_kprobes(struct list_head *oplist)
+{
+	struct optimized_kprobe *op;
+	struct optimized_kprobe *tmp;
+	unsigned int branch;
+
+	list_for_each_entry_safe(op, tmp, oplist, list) {
+		/*
+		 * Back up the instructions which will be replaced
+		 * by the jump address
+		 */
+		memcpy(op->optinsn.copied_insn, op->kp.addr,
+		       RELATIVEJUMP_SIZE);
+		branch = create_branch((unsigned int *)op->kp.addr,
+				       (unsigned long)op->optinsn.insn, 0);
+		*op->kp.addr = branch;
+		list_del_init(&op->list);
+	}
+}
+
+void arch_unoptimize_kprobe(struct optimized_kprobe *op)
+{
+	arch_arm_kprobe(&op->kp);
+}
+
+void arch_unoptimize_kprobes(struct list_head *oplist,
+			     struct list_head *done_list)
+{
+	struct optimized_kprobe *op;
+	struct optimized_kprobe *tmp;
+
+	list_for_each_entry_safe(op, tmp, oplist, list) {
+		arch_unoptimize_kprobe(op);
+		list_move(&op->list, done_list);
+	}
+}
+
+int arch_within_optimized_kprobe(struct optimized_kprobe *op,
+				 unsigned long addr)
+{
+	return 0;
+}
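
(Not part of the patch: a standalone sanity check of the two encodings
create_insn() emits, lis r4,insn@h followed by ori r4,r4,insn@l. The
simulated register behaviour below is just the documented lis/ori
semantics applied to a sample opcode.)

	#include <assert.h>
	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		uint32_t insn = 0x7c0802a6;	/* sample probed insn: mflr r0 */
		uint32_t lis = 0x3c800000 | (insn >> 16);	/* lis r4,insn@h */
		uint32_t ori = 0x60840000 | (insn & 0xffff);	/* ori r4,r4,insn@l */
		uint32_t r4;

		r4 = (lis & 0xffff) << 16;	/* lis loads the high halfword */
		r4 |= ori & 0xffff;		/* ori ORs in the low halfword */
		assert(r4 == insn);		/* r4 now holds the probed insn */
		printf("lis=0x%08x ori=0x%08x r4=0x%08x\n", lis, ori, r4);
		return 0;
	}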