From patchwork Thu Jul 4 21:26:44 2019
X-Patchwork-Submitter: Jiong Wang
X-Patchwork-Id: 1127672
X-Patchwork-Delegate: bpf@iogearbox.net
From: Jiong Wang
Subject: [RFC bpf-next 1/8] bpf: introducing list based insn patching infra to core layer
Date: Thu, 4 Jul 2019 22:26:44 +0100
Message-Id: <1562275611-31790-2-git-send-email-jiong.wang@netronome.com>

This patch introduces a list-based BPF insn patching infrastructure at the
BPF core layer, which sits below the verification layer. This layer has the
BPF insn sequence as its sole input, so the tasks to be finished during list
linearization are:
  - copy insns
  - relocate jumps
  - relocate line info

Suggested-by: Alexei Starovoitov
Suggested-by: Edward Cree
Signed-off-by: Jiong Wang
---
 include/linux/filter.h |  25 +++++
 kernel/bpf/core.c      | 268 +++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 293 insertions(+)

diff --git a/include/linux/filter.h b/include/linux/filter.h
index 1fe53e7..1fea68c 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -842,6 +842,31 @@ struct bpf_prog *bpf_patch_insn_single(struct bpf_prog *prog, u32 off,
                                        const struct bpf_insn *patch, u32 len);
 int bpf_remove_insns(struct bpf_prog *prog, u32 off, u32 cnt);
 
+int bpf_jit_adj_imm_off(struct bpf_insn *insn, int old_idx, int new_idx,
+                        int idx_map[]);
+
+#define LIST_INSN_FLAG_PATCHED 0x1
+#define LIST_INSN_FLAG_REMOVED 0x2
+struct bpf_list_insn {
+        struct bpf_insn insn;
+        struct bpf_list_insn *next;
+        s32 orig_idx;
+        u32 flag;
+};
+
+struct bpf_list_insn *bpf_create_list_insn(struct bpf_prog *prog);
+void bpf_destroy_list_insn(struct bpf_list_insn *list);
+/* Replace LIST_INSN with new list insns generated from PATCH. */
+struct bpf_list_insn *bpf_patch_list_insn(struct bpf_list_insn *list_insn,
+                                          const struct bpf_insn *patch,
+                                          u32 len);
+/* Pre-patch list_insn with insns inside PATCH, meaning LIST_INSN is not
+ * touched. New list insns are inserted before it.
+ */
+struct bpf_list_insn *bpf_prepatch_list_insn(struct bpf_list_insn *list_insn,
+                                             const struct bpf_insn *patch,
+                                             u32 len);
+
 void bpf_clear_redirect_map(struct bpf_map *map);
 
 static inline bool xdp_return_frame_no_direct(void)
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index e2c1b43..e60703e 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -502,6 +502,274 @@ int bpf_remove_insns(struct bpf_prog *prog, u32 off, u32 cnt)
         return WARN_ON_ONCE(bpf_adj_branches(prog, off, off + cnt, off, false));
 }
 
+int bpf_jit_adj_imm_off(struct bpf_insn *insn, int old_idx, int new_idx,
+                        s32 idx_map[])
+{
+        u8 code = insn->code;
+        s64 imm;
+        s32 off;
+
+        if (BPF_CLASS(code) != BPF_JMP && BPF_CLASS(code) != BPF_JMP32)
+                return 0;
+
+        if (BPF_CLASS(code) == BPF_JMP &&
+            (BPF_OP(code) == BPF_EXIT ||
+             (BPF_OP(code) == BPF_CALL && insn->src_reg != BPF_PSEUDO_CALL)))
+                return 0;
+
+        /* BPF to BPF call. */
+        if (BPF_OP(code) == BPF_CALL) {
+                imm = idx_map[old_idx + insn->imm + 1] - new_idx - 1;
+                if (imm < S32_MIN || imm > S32_MAX)
+                        return -ERANGE;
+                insn->imm = imm;
+                return 1;
+        }
+
+        /* Jump. */
+        off = idx_map[old_idx + insn->off + 1] - new_idx - 1;
+        if (off < S16_MIN || off > S16_MAX)
+                return -ERANGE;
+        insn->off = off;
+        return 0;
+}
+
+void bpf_destroy_list_insn(struct bpf_list_insn *list)
+{
+        struct bpf_list_insn *elem, *next;
+
+        for (elem = list; elem; elem = next) {
+                next = elem->next;
+                kvfree(elem);
+        }
+}
+
+struct bpf_list_insn *bpf_create_list_insn(struct bpf_prog *prog)
+{
+        unsigned int idx, len = prog->len;
+        struct bpf_list_insn *hdr, *prev;
+        struct bpf_insn *insns;
+
+        hdr = kvzalloc(sizeof(*hdr), GFP_KERNEL);
+        if (!hdr)
+                return ERR_PTR(-ENOMEM);
+
+        insns = prog->insnsi;
+        hdr->insn = insns[0];
+        hdr->orig_idx = 1;
+        prev = hdr;
+
+        for (idx = 1; idx < len; idx++) {
+                struct bpf_list_insn *node = kvzalloc(sizeof(*node),
+                                                      GFP_KERNEL);
+
+                if (!node) {
+                        /* Destroy what has been allocated. */
+                        bpf_destroy_list_insn(hdr);
+                        return ERR_PTR(-ENOMEM);
+                }
+                node->insn = insns[idx];
+                node->orig_idx = idx + 1;
+                prev->next = node;
+                prev = node;
+        }
+
+        return hdr;
+}
+
+/* Linearize bpf list insn to array. */
+static struct bpf_prog *bpf_linearize_list_insn(struct bpf_prog *prog,
+                                                struct bpf_list_insn *list)
+{
+        u32 *idx_map, idx, prev_idx, fini_cnt = 0, orig_cnt = prog->len;
+        struct bpf_insn *insns, *insn;
+        struct bpf_list_insn *elem;
+
+        /* Calculate final size. */
+        for (elem = list; elem; elem = elem->next)
+                if (!(elem->flag & LIST_INSN_FLAG_REMOVED))
+                        fini_cnt++;
+
+        insns = prog->insnsi;
+        /* If prog length remains the same, nothing else to do. */
+        if (fini_cnt == orig_cnt) {
+                for (insn = insns, elem = list; elem; elem = elem->next, insn++)
+                        *insn = elem->insn;
+                return prog;
+        }
+        /* Realloc insn buffer when necessary. */
+        if (fini_cnt > orig_cnt)
+                prog = bpf_prog_realloc(prog, bpf_prog_size(fini_cnt),
+                                        GFP_USER);
+        if (!prog)
+                return ERR_PTR(-ENOMEM);
+        insns = prog->insnsi;
+        prog->len = fini_cnt;
+
+        /* idx_map[OLD_IDX] = NEW_IDX */
+        idx_map = kvmalloc(orig_cnt * sizeof(u32), GFP_KERNEL);
+        if (!idx_map)
+                return ERR_PTR(-ENOMEM);
+        memset(idx_map, 0xff, orig_cnt * sizeof(u32));
+
+        /* Copy over insns + calculate idx_map. */
+        for (idx = 0, elem = list; elem; elem = elem->next) {
+                int orig_idx = elem->orig_idx - 1;
+
+                if (orig_idx >= 0) {
+                        idx_map[orig_idx] = idx;
+
+                        if (elem->flag & LIST_INSN_FLAG_REMOVED)
+                                continue;
+                }
+                insns[idx++] = elem->insn;
+        }
+
+        /* Relocate jumps using idx_map.
+         *   old_dst = jmp_insn.old_target + old_pc + 1;
+         *   new_dst = idx_map[old_dst] = jmp_insn.new_target + new_pc + 1;
+         *   jmp_insn.new_target = new_dst - new_pc - 1;
+         */
+        for (idx = 0, prev_idx = 0, elem = list; elem; elem = elem->next) {
+                int ret, orig_idx;
+
+                /* A removed insn doesn't increase new_pc. */
+                if (elem->flag & LIST_INSN_FLAG_REMOVED)
+                        continue;
+
+                orig_idx = elem->orig_idx - 1;
+                ret = bpf_jit_adj_imm_off(&insns[idx],
+                                          orig_idx >= 0 ? orig_idx : prev_idx,
+                                          idx, idx_map);
+                idx++;
+                if (ret < 0) {
+                        kvfree(idx_map);
+                        return ERR_PTR(ret);
+                }
+                if (orig_idx >= 0)
+                        /* Record prev_idx. It is used for relocating jump
+                         * insns inside a patch buffer. For example, when
+                         * doing jit blinding, a jump could be moved to some
+                         * other position inside the patch buffer, and its
+                         * old_dst could be calculated using prev_idx.
+                         */
+                        prev_idx = orig_idx;
+        }
+
+        /* Adjust linfo.
+         *
+         * NOTE: the prog that reaches the core layer has been adjusted to
+         *       contain insns for a single function, while linfo contains
+         *       information for the whole program, so we need to make sure
+         *       linfo beyond the current function is handled properly.
+         */
+        if (prog->aux->nr_linfo) {
+                u32 linfo_idx, insn_start, insn_end, nr_linfo, idx, delta;
+                struct bpf_line_info *linfo;
+
+                linfo_idx = prog->aux->linfo_idx;
+                linfo = &prog->aux->linfo[linfo_idx];
+                insn_start = linfo[0].insn_off;
+                insn_end = insn_start + orig_cnt;
+                nr_linfo = prog->aux->nr_linfo - linfo_idx;
+                delta = fini_cnt - orig_cnt;
+                for (idx = 0; idx < nr_linfo; idx++) {
+                        int adj_off;
+
+                        if (linfo[idx].insn_off >= insn_end) {
+                                linfo[idx].insn_off += delta;
+                                continue;
+                        }
+
+                        adj_off = linfo[idx].insn_off - insn_start;
+                        linfo[idx].insn_off = idx_map[adj_off] + insn_start;
+                }
+        }
+        kvfree(idx_map);
+
+        return prog;
+}
+
+struct bpf_list_insn *bpf_patch_list_insn(struct bpf_list_insn *list_insn,
+                                          const struct bpf_insn *patch,
+                                          u32 len)
+{
+        struct bpf_list_insn *prev, *next;
+        u32 insn_delta = len - 1;
+        u32 idx;
+
+        list_insn->insn = *patch;
+        list_insn->flag |= LIST_INSN_FLAG_PATCHED;
+
+        /* Since our patchlet doesn't expand the image, we're done. */
+        if (insn_delta == 0)
+                return list_insn;
+
+        len--;
+        patch++;
+
+        prev = list_insn;
+        next = list_insn->next;
+        for (idx = 0; idx < len; idx++) {
+                struct bpf_list_insn *node = kvzalloc(sizeof(*node),
+                                                      GFP_KERNEL);
+
+                if (!node) {
+                        /* Link what has been allocated, so the list
+                         * destroyer can free it.
+                         */
+                        prev->next = next;
+                        return ERR_PTR(-ENOMEM);
+                }
+
+                node->insn = patch[idx];
+                prev->next = node;
+                prev = node;
+        }
+
+        prev->next = next;
+        return prev;
+}
+
+struct bpf_list_insn *bpf_prepatch_list_insn(struct bpf_list_insn *list_insn,
+                                             const struct bpf_insn *patch,
+                                             u32 len)
+{
+        struct bpf_list_insn *prev, *node, *begin_node;
+        u32 idx;
+
+        if (!len)
+                return list_insn;
+
+        node = kvzalloc(sizeof(*node), GFP_KERNEL);
+        if (!node)
+                return ERR_PTR(-ENOMEM);
+        node->insn = patch[0];
+        begin_node = node;
+        prev = node;
+
+        for (idx = 1; idx < len; idx++) {
+                node = kvzalloc(sizeof(*node), GFP_KERNEL);
+                if (!node) {
+                        node = begin_node;
+                        /* Release what has been allocated. */
+                        while (node) {
+                                struct bpf_list_insn *next = node->next;
+
+                                kvfree(node);
+                                node = next;
+                        }
+                        return ERR_PTR(-ENOMEM);
+                }
+                node->insn = patch[idx];
+                prev->next = node;
+                prev = node;
+        }
+
+        prev->next = list_insn;
+        return begin_node;
+}
+
 void bpf_prog_kallsyms_del_subprogs(struct bpf_prog *fp)
 {
         int i;
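To see how these primitives chain together, here is a minimal usage sketch
of a core-layer rewrite pass, as it would look inside kernel/bpf/core.c
(insn_needs_rewrite, patch_buf and patch_len are hypothetical, and error
handling is elided):

	/* Minimal sketch, not part of the patch. */
	static struct bpf_prog *example_rewrite(struct bpf_prog *prog)
	{
		struct bpf_list_insn *list, *elem;

		list = bpf_create_list_insn(prog);
		if (IS_ERR(list))
			return (struct bpf_prog *)list;

		for (elem = list; elem; elem = elem->next) {
			if (!insn_needs_rewrite(&elem->insn))	/* hypothetical */
				continue;
			/* Returns the last node of the expansion, so the walk
			 * naturally skips the insns just inserted.
			 */
			elem = bpf_patch_list_insn(elem, patch_buf, patch_len);
			if (IS_ERR(elem))
				break;	/* error handling elided */
		}

		/* One linearization pass copies insns out, relocates jumps
		 * via idx_map and adjusts line info.
		 */
		prog = bpf_linearize_list_insn(prog, list);
		bpf_destroy_list_insn(list);
		return prog;
	}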
From patchwork Thu Jul 4 21:26:45 2019
X-Patchwork-Submitter: Jiong Wang
X-Patchwork-Id: 1127674
X-Patchwork-Delegate: bpf@iogearbox.net
From: Jiong Wang
Subject: [RFC bpf-next 2/8] bpf: extend list based insn patching infra to verification layer
Date: Thu, 4 Jul 2019 22:26:45 +0100
Message-Id: <1562275611-31790-3-git-send-email-jiong.wang@netronome.com>

The verification layer also needs to handle auxiliary info as well as
adjusting subprog starts. At this layer, insns inside a patch buffer could
be jumps, but they should already have been resolved, meaning they
shouldn't jump to insns outside the patch buffer; the linearization
function for this layer therefore won't touch insns inside a patch buffer.
Subprog starts are adjusted along with jump targets: since the input covers
bpf-to-bpf call insns, re-registering subprog starts is cheap. Adjustment
in the presence of insn deletion, however, is not handled yet.

Signed-off-by: Jiong Wang
---
 kernel/bpf/verifier.c | 150 ++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 150 insertions(+)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index a2e7637..2026d64 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -8350,6 +8350,156 @@ static void opt_hard_wire_dead_code_branches(struct bpf_verifier_env *env)
         }
 }
 
+/* Linearize bpf list insn to array (verifier layer). */
+static struct bpf_verifier_env *
+verifier_linearize_list_insn(struct bpf_verifier_env *env,
+                             struct bpf_list_insn *list)
+{
+        u32 *idx_map, idx, orig_cnt, fini_cnt = 0;
+        struct bpf_subprog_info *new_subinfo;
+        struct bpf_insn_aux_data *new_data;
+        struct bpf_prog *prog = env->prog;
+        struct bpf_verifier_env *ret_env;
+        struct bpf_insn *insns, *insn;
+        struct bpf_list_insn *elem;
+        int ret;
+
+        /* Calculate final size. */
+        for (elem = list; elem; elem = elem->next)
+                if (!(elem->flag & LIST_INSN_FLAG_REMOVED))
+                        fini_cnt++;
+
+        orig_cnt = prog->len;
+        insns = prog->insnsi;
+        /* If prog length remains the same, nothing else to do. */
+        if (fini_cnt == orig_cnt) {
+                for (insn = insns, elem = list; elem; elem = elem->next, insn++)
+                        *insn = elem->insn;
+                return env;
+        }
+        /* Realloc insn buffer when necessary. */
+        if (fini_cnt > orig_cnt)
+                prog = bpf_prog_realloc(prog, bpf_prog_size(fini_cnt),
+                                        GFP_USER);
+        if (!prog)
+                return ERR_PTR(-ENOMEM);
+        insns = prog->insnsi;
+        prog->len = fini_cnt;
+        ret_env = env;
+
+        /* idx_map[OLD_IDX] = NEW_IDX */
+        idx_map = kvmalloc(orig_cnt * sizeof(u32), GFP_KERNEL);
+        if (!idx_map)
+                return ERR_PTR(-ENOMEM);
+        memset(idx_map, 0xff, orig_cnt * sizeof(u32));
+
+        /* Use the same alloc method used when allocating env->insn_aux_data. */
+        new_data = vzalloc(array_size(sizeof(*new_data), fini_cnt));
+        if (!new_data) {
+                kvfree(idx_map);
+                return ERR_PTR(-ENOMEM);
+        }
+
+        /* Copy over insns + calculate idx_map. */
+        for (idx = 0, elem = list; elem; elem = elem->next) {
+                int orig_idx = elem->orig_idx - 1;
+
+                if (orig_idx >= 0) {
+                        idx_map[orig_idx] = idx;
+
+                        if (elem->flag & LIST_INSN_FLAG_REMOVED)
+                                continue;
+
+                        new_data[idx] = env->insn_aux_data[orig_idx];
+
+                        if (elem->flag & LIST_INSN_FLAG_PATCHED)
+                                new_data[idx].zext_dst =
+                                        insn_has_def32(env, &elem->insn);
+                } else {
+                        new_data[idx].seen = true;
+                        new_data[idx].zext_dst = insn_has_def32(env,
+                                                                &elem->insn);
+                }
+                insns[idx++] = elem->insn;
+        }
+
+        new_subinfo = kvzalloc(sizeof(env->subprog_info), GFP_KERNEL);
+        if (!new_subinfo) {
+                kvfree(idx_map);
+                vfree(new_data);
+                return ERR_PTR(-ENOMEM);
+        }
+        memcpy(new_subinfo, env->subprog_info, sizeof(env->subprog_info));
+        memset(env->subprog_info, 0, sizeof(env->subprog_info));
+        env->subprog_cnt = 0;
+        env->prog = prog;
+        ret = add_subprog(env, 0);
+        if (ret < 0) {
+                ret_env = ERR_PTR(ret);
+                goto free_all_ret;
+        }
+        /* Relocate jumps using idx_map.
+         *   old_dst = jmp_insn.old_target + old_pc + 1;
+         *   new_dst = idx_map[old_dst] = jmp_insn.new_target + new_pc + 1;
+         *   jmp_insn.new_target = new_dst - new_pc - 1;
+         */
+        for (idx = 0, elem = list; elem; elem = elem->next) {
+                int orig_idx = elem->orig_idx;
+
+                if (elem->flag & LIST_INSN_FLAG_REMOVED)
+                        continue;
+                if ((elem->flag & LIST_INSN_FLAG_PATCHED) || !orig_idx) {
+                        idx++;
+                        continue;
+                }
+
+                ret = bpf_jit_adj_imm_off(&insns[idx], orig_idx - 1, idx,
+                                          idx_map);
+                if (ret < 0) {
+                        ret_env = ERR_PTR(ret);
+                        goto free_all_ret;
+                }
+                /* Recalculate subprog start as we are at bpf2bpf call insn. */
+                if (ret > 0) {
+                        ret = add_subprog(env, idx + insns[idx].imm + 1);
+                        if (ret < 0) {
+                                ret_env = ERR_PTR(ret);
+                                goto free_all_ret;
+                        }
+                }
+                idx++;
+        }
+        if (ret < 0) {
+                ret_env = ERR_PTR(ret);
+                goto free_all_ret;
+        }
+
+        env->subprog_info[env->subprog_cnt].start = fini_cnt;
+        for (idx = 0; idx <= env->subprog_cnt; idx++)
+                new_subinfo[idx].start = env->subprog_info[idx].start;
+        memcpy(env->subprog_info, new_subinfo, sizeof(env->subprog_info));
+
+        /* Adjust linfo.
+         * FIXME: no support for insn removal at the moment.
+         */
+        if (prog->aux->nr_linfo) {
+                struct bpf_line_info *linfo = prog->aux->linfo;
+                u32 nr_linfo = prog->aux->nr_linfo;
+
+                for (idx = 0; idx < nr_linfo; idx++)
+                        linfo[idx].insn_off = idx_map[linfo[idx].insn_off];
+        }
+        vfree(env->insn_aux_data);
+        env->insn_aux_data = new_data;
+        goto free_mem_list_ret;
+free_all_ret:
+        vfree(new_data);
+free_mem_list_ret:
+        kvfree(new_subinfo);
+        kvfree(idx_map);
+        return ret_env;
+}
+
 static int opt_remove_dead_code(struct bpf_verifier_env *env)
 {
         struct bpf_insn_aux_data *aux_data = env->insn_aux_data;
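The relocation formula used by both layers is easiest to check with
concrete numbers; a worked example with hypothetical indexes:

	/* Take a jump at old_pc = 3 with off = +2, so
	 *
	 *	old_dst = old_pc + off + 1 = 3 + 2 + 1 = 6.
	 *
	 * Suppose the insn at old index 4 was patched into three insns
	 * (delta = +2).  Then:
	 *
	 *	idx_map[0..4] = {0, 1, 2, 3, 4}    (unchanged prefix)
	 *	idx_map[5]    = 7
	 *	idx_map[6]    = 8
	 *
	 * The jump itself still sits at new_pc = 3, and
	 *
	 *	new off = idx_map[old_dst] - new_pc - 1 = 8 - 3 - 1 = 4,
	 *
	 * which indeed reaches 3 + 4 + 1 = 8, the relocated target.
	 * bpf2bpf calls are handled the same way, only through insn->imm
	 * and with the S32 range check instead of S16.
	 */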
From patchwork Thu Jul 4 21:26:46 2019
X-Patchwork-Submitter: Jiong Wang
X-Patchwork-Id: 1127669
X-Patchwork-Delegate: bpf@iogearbox.net
From: Jiong Wang
Subject: [RFC bpf-next 3/8] bpf: migrate jit blinding to list patching infra
Date: Thu, 4 Jul 2019 22:26:46 +0100
Message-Id: <1562275611-31790-4-git-send-email-jiong.wang@netronome.com>

The list linearization function will figure out the new jump destinations
of patched/blinded jumps, so there is no need for destination adjustment
inside bpf_jit_blind_insn any more.

Signed-off-by: Jiong Wang
---
 kernel/bpf/core.c | 76 ++++++++++++++++++++++++++-----------------------------
 1 file changed, 36 insertions(+), 40 deletions(-)

diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index e60703e..c3a5f84 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -1162,7 +1162,6 @@ static int bpf_jit_blind_insn(const struct bpf_insn *from,
 {
         struct bpf_insn *to = to_buff;
         u32 imm_rnd = get_random_int();
-        s16 off;
 
         BUILD_BUG_ON(BPF_REG_AX + 1 != MAX_BPF_JIT_REG);
         BUILD_BUG_ON(MAX_BPF_REG + 1 != MAX_BPF_JIT_REG);
@@ -1234,13 +1233,10 @@ static int bpf_jit_blind_insn(const struct bpf_insn *from,
         case BPF_JMP | BPF_JSGE | BPF_K:
         case BPF_JMP | BPF_JSLE | BPF_K:
         case BPF_JMP | BPF_JSET | BPF_K:
-                /* Accommodate for extra offset in case of a backjump. */
-                off = from->off;
-                if (off < 0)
-                        off -= 2;
                 *to++ = BPF_ALU64_IMM(BPF_MOV, BPF_REG_AX, imm_rnd ^ from->imm);
                 *to++ = BPF_ALU64_IMM(BPF_XOR, BPF_REG_AX, imm_rnd);
-                *to++ = BPF_JMP_REG(from->code, from->dst_reg, BPF_REG_AX, off);
+                *to++ = BPF_JMP_REG(from->code, from->dst_reg, BPF_REG_AX,
+                                    from->off);
                 break;
 
         case BPF_JMP32 | BPF_JEQ | BPF_K:
@@ -1254,14 +1250,10 @@ static int bpf_jit_blind_insn(const struct bpf_insn *from,
         case BPF_JMP32 | BPF_JSGE | BPF_K:
         case BPF_JMP32 | BPF_JSLE | BPF_K:
         case BPF_JMP32 | BPF_JSET | BPF_K:
-                /* Accommodate for extra offset in case of a backjump. */
-                off = from->off;
-                if (off < 0)
-                        off -= 2;
                 *to++ = BPF_ALU32_IMM(BPF_MOV, BPF_REG_AX, imm_rnd ^ from->imm);
                 *to++ = BPF_ALU32_IMM(BPF_XOR, BPF_REG_AX, imm_rnd);
                 *to++ = BPF_JMP32_REG(from->code, from->dst_reg, BPF_REG_AX,
-                                      off);
+                                      from->off);
                 break;
 
         case BPF_LD | BPF_IMM | BPF_DW:
@@ -1332,10 +1324,9 @@ void bpf_jit_prog_release_other(struct bpf_prog *fp, struct bpf_prog *fp_other)
 struct bpf_prog *bpf_jit_blind_constants(struct bpf_prog *prog)
 {
         struct bpf_insn insn_buff[16], aux[2];
-        struct bpf_prog *clone, *tmp;
-        int insn_delta, insn_cnt;
-        struct bpf_insn *insn;
-        int i, rewritten;
+        struct bpf_list_insn *list, *elem;
+        struct bpf_prog *clone, *ret_prog;
+        int rewritten;
 
         if (!bpf_jit_blinding_enabled(prog) || prog->blinded)
                 return prog;
@@ -1344,43 +1335,48 @@ struct bpf_prog *bpf_jit_blind_constants(struct bpf_prog *prog)
         if (!clone)
                 return ERR_PTR(-ENOMEM);
 
-        insn_cnt = clone->len;
-        insn = clone->insnsi;
+        list = bpf_create_list_insn(clone);
+        if (IS_ERR(list))
+                return (struct bpf_prog *)list;
+
+        /* Kill uninitialized warning on some gcc versions. */
+        memset(&aux, 0, sizeof(aux));
+
+        for (elem = list; elem; elem = elem->next) {
+                struct bpf_list_insn *next = elem->next;
+                struct bpf_insn insn = elem->insn;
 
-        for (i = 0; i < insn_cnt; i++, insn++) {
                 /* We temporarily need to hold the original ld64 insn
                  * so that we can still access the first part in the
                  * second blinding run.
                  */
-                if (insn[0].code == (BPF_LD | BPF_IMM | BPF_DW) &&
-                    insn[1].code == 0)
-                        memcpy(aux, insn, sizeof(aux));
+                if (insn.code == (BPF_LD | BPF_IMM | BPF_DW)) {
+                        struct bpf_insn next_insn = next->insn;
 
-                rewritten = bpf_jit_blind_insn(insn, aux, insn_buff);
+                        if (next_insn.code == 0) {
+                                aux[0] = insn;
+                                aux[1] = next_insn;
+                        }
+                }
+
+                rewritten = bpf_jit_blind_insn(&insn, aux, insn_buff);
                 if (!rewritten)
                         continue;
 
-                tmp = bpf_patch_insn_single(clone, i, insn_buff, rewritten);
-                if (IS_ERR(tmp)) {
-                        /* Patching may have repointed aux->prog during
-                         * realloc from the original one, so we need to
-                         * fix it up here on error.
-                         */
-                        bpf_jit_prog_release_other(prog, clone);
-                        return tmp;
+                elem = bpf_patch_list_insn(elem, insn_buff, rewritten);
+                if (IS_ERR(elem)) {
+                        ret_prog = (struct bpf_prog *)elem;
+                        goto free_list_ret;
                 }
-
-                clone = tmp;
-                insn_delta = rewritten - 1;
-
-                /* Walk new program and skip insns we just inserted. */
-                insn = clone->insnsi + i + insn_delta;
-                insn_cnt += insn_delta;
-                i += insn_delta;
         }
 
-        clone->blinded = 1;
-        return clone;
+        clone = bpf_linearize_list_insn(clone, list);
+        if (!IS_ERR(clone))
+                clone->blinded = 1;
+        ret_prog = clone;
+free_list_ret:
+        bpf_destroy_list_insn(list);
+        return ret_prog;
 }
 #endif /* CONFIG_BPF_JIT */
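The effect on a blinded conditional jump, illustrated with a hypothetical
insn (imm_rnd is the per-program random value):

	/* Hypothetical input insn:
	 *
	 *	BPF_JMP_IMM(BPF_JGT, BPF_REG_1, 0x1234, -5)
	 *
	 * bpf_jit_blind_insn() now emits, with the offset untouched:
	 *
	 *	BPF_ALU64_IMM(BPF_MOV, BPF_REG_AX, 0x1234 ^ imm_rnd)
	 *	BPF_ALU64_IMM(BPF_XOR, BPF_REG_AX, imm_rnd)
	 *	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_AX, -5)
	 *
	 * Before this patch a backjump had to be hand-adjusted by -2 to
	 * account for the two insns inserted in front of it; with the list
	 * infra the linearization pass recomputes the offset from idx_map
	 * (using prev_idx for insns living inside a patch buffer), so
	 * from->off is copied through unchanged.
	 */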
From patchwork Thu Jul 4 21:26:47 2019
X-Patchwork-Submitter: Jiong Wang
X-Patchwork-Id: 1127678
X-Patchwork-Delegate: bpf@iogearbox.net
From: Jiong Wang
Subject: [RFC bpf-next 4/8] bpf: migrate convert_ctx_accesses to list patching infra
Date: Thu, 4 Jul 2019 22:26:47 +0100
Message-Id: <1562275611-31790-5-git-send-email-jiong.wang@netronome.com>

This patch migrates convert_ctx_accesses to the new list patching
infrastructure. Pre-patching is used for generating the prologue, because
what we really want is to insert insns before the program start without
touching the first insn.
Signed-off-by: Jiong Wang
---
 kernel/bpf/verifier.c | 98 ++++++++++++++++++++++++++++---------------------
 1 file changed, 58 insertions(+), 40 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 2026d64..2d16e85 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -8631,41 +8631,59 @@ static int opt_subreg_zext_lo32_rnd_hi32(struct bpf_verifier_env *env,
 static int convert_ctx_accesses(struct bpf_verifier_env *env)
 {
         const struct bpf_verifier_ops *ops = env->ops;
-        int i, cnt, size, ctx_field_size, delta = 0;
-        const int insn_cnt = env->prog->len;
         struct bpf_insn insn_buf[16], *insn;
         u32 target_size, size_default, off;
-        struct bpf_prog *new_prog;
+        struct bpf_list_insn *list, *elem;
+        int cnt, size, ctx_field_size;
         enum bpf_access_type type;
         bool is_narrower_load;
+        int ret = 0;
+
+        list = bpf_create_list_insn(env->prog);
+        if (IS_ERR(list))
+                return PTR_ERR(list);
+        elem = list;
 
         if (ops->gen_prologue || env->seen_direct_write) {
                 if (!ops->gen_prologue) {
                         verbose(env, "bpf verifier is misconfigured\n");
-                        return -EINVAL;
+                        ret = -EINVAL;
+                        goto free_list_ret;
                 }
                 cnt = ops->gen_prologue(insn_buf, env->seen_direct_write,
                                         env->prog);
                 if (cnt >= ARRAY_SIZE(insn_buf)) {
                         verbose(env, "bpf verifier is misconfigured\n");
-                        return -EINVAL;
+                        ret = -EINVAL;
+                        goto free_list_ret;
                 } else if (cnt) {
-                        new_prog = bpf_patch_insn_data(env, 0, insn_buf, cnt);
-                        if (!new_prog)
-                                return -ENOMEM;
+                        struct bpf_list_insn *new_hdr;
 
-                        env->prog = new_prog;
-                        delta += cnt - 1;
+                        /* "gen_prologue" generates a patch buffer, but we
+                         * want the pre-patch primitive because we don't
+                         * want to touch the insn/aux at the start offset.
+                         */
+                        new_hdr = bpf_prepatch_list_insn(list, insn_buf,
+                                                         cnt - 1);
+                        if (IS_ERR(new_hdr)) {
+                                ret = -ENOMEM;
+                                goto free_list_ret;
+                        }
+                        /* Update the list head, so the new pre-patched
+                         * nodes can be freed by the destroyer.
+                         */
+                        list = new_hdr;
                 }
         }
 
         if (bpf_prog_is_dev_bound(env->prog->aux))
-                return 0;
+                goto linearize_list_ret;
 
-        insn = env->prog->insnsi + delta;
-
-        for (i = 0; i < insn_cnt; i++, insn++) {
+        for (; elem; elem = elem->next) {
                 bpf_convert_ctx_access_t convert_ctx_access;
+                struct bpf_insn_aux_data *aux;
+
+                insn = &elem->insn;
 
                 if (insn->code == (BPF_LDX | BPF_MEM | BPF_B) ||
                     insn->code == (BPF_LDX | BPF_MEM | BPF_H) ||
@@ -8680,8 +8698,8 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
                 else
                         continue;
 
-                if (type == BPF_WRITE &&
-                    env->insn_aux_data[i + delta].sanitize_stack_off) {
+                aux = &env->insn_aux_data[elem->orig_idx - 1];
+                if (type == BPF_WRITE && aux->sanitize_stack_off) {
                         struct bpf_insn patch[] = {
                                 /* Sanitize suspicious stack slot with zero.
                                 * There are no memory dependencies for this store,
@@ -8689,8 +8707,7 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
                                  * constant of zero
                                  */
                                 BPF_ST_MEM(BPF_DW, BPF_REG_FP,
-                                           env->insn_aux_data[i + delta].sanitize_stack_off,
-                                           0),
+                                           aux->sanitize_stack_off, 0),
                                 /* the original STX instruction will immediately
                                  * overwrite the same stack slot with appropriate value
                                  */
                                 *insn,
                         };
 
                         cnt = ARRAY_SIZE(patch);
-                        new_prog = bpf_patch_insn_data(env, i + delta, patch, cnt);
-                        if (!new_prog)
-                                return -ENOMEM;
-
-                        delta += cnt - 1;
-                        env->prog = new_prog;
-                        insn = new_prog->insnsi + i + delta;
+                        elem = bpf_patch_list_insn(elem, patch, cnt);
+                        if (IS_ERR(elem)) {
+                                ret = PTR_ERR(elem);
+                                goto free_list_ret;
+                        }
                         continue;
                 }
 
-                switch (env->insn_aux_data[i + delta].ptr_type) {
+                switch (aux->ptr_type) {
                 case PTR_TO_CTX:
                         if (!ops->convert_ctx_access)
                                 continue;
@@ -8728,7 +8743,7 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
                         continue;
                 }
 
-                ctx_field_size = env->insn_aux_data[i + delta].ctx_field_size;
+                ctx_field_size = aux->ctx_field_size;
                 size = BPF_LDST_BYTES(insn);
 
                 /* If the read access is a narrower load of the field,
@@ -8744,7 +8759,8 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
                         if (type == BPF_WRITE) {
                                 verbose(env, "bpf verifier narrow ctx access misconfigured\n");
-                                return -EINVAL;
+                                ret = -EINVAL;
+                                goto free_list_ret;
                         }
 
                         size_code = BPF_H;
@@ -8763,7 +8779,8 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
                 if (cnt == 0 || cnt >= ARRAY_SIZE(insn_buf) ||
                     (ctx_field_size && !target_size)) {
                         verbose(env, "bpf verifier is misconfigured\n");
-                        return -EINVAL;
+                        ret = -EINVAL;
+                        goto free_list_ret;
                 }
 
                 if (is_narrower_load && size < target_size) {
@@ -8786,18 +8803,19 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
                         }
                 }
 
-                new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt);
-                if (!new_prog)
-                        return -ENOMEM;
-
-                delta += cnt - 1;
-
-                /* keep walking new program and skip insns we just inserted */
-                env->prog = new_prog;
-                insn = new_prog->insnsi + i + delta;
+                elem = bpf_patch_list_insn(elem, insn_buf, cnt);
+                if (IS_ERR(elem)) {
+                        ret = PTR_ERR(elem);
+                        goto free_list_ret;
+                }
         }
-
-        return 0;
+linearize_list_ret:
+        env = verifier_linearize_list_insn(env, list);
+        if (IS_ERR(env))
+                ret = PTR_ERR(env);
+free_list_ret:
+        bpf_destroy_list_insn(list);
+        return ret;
 }
 
 static int jit_subprogs(struct bpf_verifier_env *env)
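One subtlety worth spelling out: gen_prologue() implementations
conventionally end their buffer with a copy of the original first insn,
since that is how the old replace-insn-0 patching consumed it. That is why
only cnt - 1 insns are pre-patched above; a sketch:

	/* insn_buf holds cnt insns, the last being a duplicate of
	 * prog->insnsi[0].  Linking only the first cnt - 1 in front of the
	 * list head leaves the original first node, and the aux data
	 * attached to it, untouched:
	 */
	new_hdr = bpf_prepatch_list_insn(list, insn_buf, cnt - 1);
	if (IS_ERR(new_hdr))
		return PTR_ERR(new_hdr);
	list = new_hdr;	/* new head, so the destroyer frees these too */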
From patchwork Thu Jul 4 21:26:48 2019
X-Patchwork-Submitter: Jiong Wang
X-Patchwork-Id: 1127681
X-Patchwork-Delegate: bpf@iogearbox.net
From: Jiong Wang
Subject: [RFC bpf-next 5/8] bpf: migrate fixup_bpf_calls to list patching infra
Date: Thu, 4 Jul 2019 22:26:48 +0100
Message-Id: <1562275611-31790-6-git-send-email-jiong.wang@netronome.com>

This patch migrates fixup_bpf_calls to the new list patching
infrastructure.
Signed-off-by: Jiong Wang
---
 kernel/bpf/verifier.c | 94 +++++++++++++++++++++++++++------------------------
 1 file changed, 49 insertions(+), 45 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 2d16e85..30ed28e 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -9033,16 +9033,19 @@ static int fixup_bpf_calls(struct bpf_verifier_env *env)
 {
         struct bpf_prog *prog = env->prog;
         struct bpf_insn *insn = prog->insnsi;
+        struct bpf_list_insn *list, *elem;
         const struct bpf_func_proto *fn;
-        const int insn_cnt = prog->len;
         const struct bpf_map_ops *ops;
         struct bpf_insn_aux_data *aux;
         struct bpf_insn insn_buf[16];
-        struct bpf_prog *new_prog;
         struct bpf_map *map_ptr;
-        int i, cnt, delta = 0;
+        int cnt, ret = 0;
 
-        for (i = 0; i < insn_cnt; i++, insn++) {
+        list = bpf_create_list_insn(env->prog);
+        if (IS_ERR(list))
+                return PTR_ERR(list);
+        for (elem = list; elem; elem = elem->next) {
+                insn = &elem->insn;
                 if (insn->code == (BPF_ALU64 | BPF_MOD | BPF_X) ||
                     insn->code == (BPF_ALU64 | BPF_DIV | BPF_X) ||
                     insn->code == (BPF_ALU | BPF_MOD | BPF_X) ||
@@ -9073,13 +9076,11 @@ static int fixup_bpf_calls(struct bpf_verifier_env *env)
                         cnt = ARRAY_SIZE(mask_and_mod) - (is64 ? 1 : 0);
                 }
 
-                new_prog = bpf_patch_insn_data(env, i + delta, patchlet, cnt);
-                if (!new_prog)
-                        return -ENOMEM;
-
-                delta += cnt - 1;
-                env->prog = prog = new_prog;
-                insn = new_prog->insnsi + i + delta;
+                elem = bpf_patch_list_insn(elem, patchlet, cnt);
+                if (IS_ERR(elem)) {
+                        ret = PTR_ERR(elem);
+                        goto free_list_ret;
+                }
                 continue;
                 }
 
@@ -9089,16 +9090,15 @@ static int fixup_bpf_calls(struct bpf_verifier_env *env)
                         cnt = env->ops->gen_ld_abs(insn, insn_buf);
                         if (cnt == 0 || cnt >= ARRAY_SIZE(insn_buf)) {
                                 verbose(env, "bpf verifier is misconfigured\n");
-                                return -EINVAL;
+                                ret = -EINVAL;
+                                goto free_list_ret;
                         }
 
-                        new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt);
-                        if (!new_prog)
-                                return -ENOMEM;
-
-                        delta += cnt - 1;
-                        env->prog = prog = new_prog;
-                        insn = new_prog->insnsi + i + delta;
+                        elem = bpf_patch_list_insn(elem, insn_buf, cnt);
+                        if (IS_ERR(elem)) {
+                                ret = PTR_ERR(elem);
+                                goto free_list_ret;
+                        }
                         continue;
                 }
 
@@ -9111,7 +9111,7 @@ static int fixup_bpf_calls(struct bpf_verifier_env *env)
                         bool issrc, isneg;
                         u32 off_reg;
 
-                        aux = &env->insn_aux_data[i + delta];
+                        aux = &env->insn_aux_data[elem->orig_idx - 1];
                         if (!aux->alu_state ||
                             aux->alu_state == BPF_ALU_NON_POINTER)
                                 continue;
@@ -9144,13 +9144,12 @@ static int fixup_bpf_calls(struct bpf_verifier_env *env)
                         *patch++ = BPF_ALU64_IMM(BPF_MUL, off_reg, -1);
 
                         cnt = patch - insn_buf;
-                        new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt);
-                        if (!new_prog)
-                                return -ENOMEM;
+                        elem = bpf_patch_list_insn(elem, insn_buf, cnt);
+                        if (IS_ERR(elem)) {
+                                ret = PTR_ERR(elem);
+                                goto free_list_ret;
+                        }
 
-                        delta += cnt - 1;
-                        env->prog = prog = new_prog;
-                        insn = new_prog->insnsi + i + delta;
                         continue;
                 }
 
@@ -9183,7 +9182,7 @@ static int fixup_bpf_calls(struct bpf_verifier_env *env)
                         insn->imm = 0;
                         insn->code = BPF_JMP | BPF_TAIL_CALL;
 
-                        aux = &env->insn_aux_data[i + delta];
+                        aux = &env->insn_aux_data[elem->orig_idx - 1];
                         if (!bpf_map_ptr_unpriv(aux))
                                 continue;
 
@@ -9195,7 +9194,8 @@ static int fixup_bpf_calls(struct bpf_verifier_env *env)
                          */
                         if (bpf_map_ptr_poisoned(aux)) {
                                 verbose(env, "tail_call abusing map_ptr\n");
-                                return -EINVAL;
+                                ret = -EINVAL;
+                                goto free_list_ret;
                         }
 
                         map_ptr = BPF_MAP_PTR(aux->map_state);
@@ -9207,13 +9207,12 @@ static int fixup_bpf_calls(struct bpf_verifier_env *env)
                                                   map)->index_mask);
                         insn_buf[2] = *insn;
                         cnt = 3;
-                        new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt);
-                        if (!new_prog)
-                                return -ENOMEM;
+                        elem = bpf_patch_list_insn(elem, insn_buf, cnt);
+                        if (IS_ERR(elem)) {
+                                ret = PTR_ERR(elem);
+                                goto free_list_ret;
+                        }
 
-                        delta += cnt - 1;
-                        env->prog = prog = new_prog;
-                        insn = new_prog->insnsi + i + delta;
                         continue;
                 }
 
@@ -9228,7 +9227,7 @@ static int fixup_bpf_calls(struct bpf_verifier_env *env)
                      insn->imm == BPF_FUNC_map_push_elem   ||
                      insn->imm == BPF_FUNC_map_pop_elem    ||
                      insn->imm == BPF_FUNC_map_peek_elem)) {
-                        aux = &env->insn_aux_data[i + delta];
+                        aux = &env->insn_aux_data[elem->orig_idx - 1];
                         if (bpf_map_ptr_poisoned(aux))
                                 goto patch_call_imm;
 
@@ -9239,17 +9238,16 @@ static int fixup_bpf_calls(struct bpf_verifier_env *env)
                                 cnt = ops->map_gen_lookup(map_ptr, insn_buf);
                                 if (cnt == 0 || cnt >= ARRAY_SIZE(insn_buf)) {
                                         verbose(env, "bpf verifier is misconfigured\n");
-                                        return -EINVAL;
+                                        ret = -EINVAL;
+                                        goto free_list_ret;
                                 }
 
-                                new_prog = bpf_patch_insn_data(env, i + delta,
-                                                               insn_buf, cnt);
-                                if (!new_prog)
-                                        return -ENOMEM;
+                                elem = bpf_patch_list_insn(elem, insn_buf, cnt);
+                                if (IS_ERR(elem)) {
+                                        ret = PTR_ERR(elem);
+                                        goto free_list_ret;
+                                }
 
-                                delta += cnt - 1;
-                                env->prog = prog = new_prog;
-                                insn = new_prog->insnsi + i + delta;
                                 continue;
                         }
 
@@ -9307,12 +9305,18 @@ static int fixup_bpf_calls(struct bpf_verifier_env *env)
                         verbose(env,
                                 "kernel subsystem misconfigured func %s#%d\n",
                                 func_id_name(insn->imm), insn->imm);
-                        return -EFAULT;
+                        ret = -EFAULT;
+                        goto free_list_ret;
                 }
                 insn->imm = fn->func - __bpf_call_base;
         }
 
-        return 0;
+        env = verifier_linearize_list_insn(env, list);
+        if (IS_ERR(env))
+                ret = PTR_ERR(env);
+free_list_ret:
+        bpf_destroy_list_insn(list);
+        return ret;
 }
 
 static void free_states(struct bpf_verifier_env *env)
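fixup_bpf_calls lands on the same shape as convert_ctx_accesses; condensed,
every migrated verifier pass is an instance of the following sketch (error
paths abbreviated):

	list = bpf_create_list_insn(env->prog);
	if (IS_ERR(list))
		return PTR_ERR(list);

	for (elem = list; elem; elem = elem->next) {
		/* Aux data of an original insn is found via its 1-based
		 * orig_idx; nodes born from a patch buffer have orig_idx 0.
		 */
		aux = &env->insn_aux_data[elem->orig_idx - 1];

		/* ... decide on a rewrite, then: */
		elem = bpf_patch_list_insn(elem, insn_buf, cnt);
	}

	env = verifier_linearize_list_insn(env, list);	/* one relocation pass */
	if (IS_ERR(env))
		ret = PTR_ERR(env);
	bpf_destroy_list_insn(list);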
From patchwork Thu Jul 4 21:26:49 2019
X-Patchwork-Submitter: Jiong Wang
X-Patchwork-Id: 1127679
X-Patchwork-Delegate: bpf@iogearbox.net
From: Jiong Wang
Subject: [RFC bpf-next 6/8] bpf: migrate zero extension opt to list patching infra
Date: Thu, 4 Jul 2019 22:26:49 +0100
Message-Id: <1562275611-31790-7-git-send-email-jiong.wang@netronome.com>

This patch migrates 32-bit zero extension insertion to the new list
patching infrastructure.
Signed-off-by: Jiong Wang
---
 kernel/bpf/verifier.c | 45 +++++++++++++++++++++++++--------------------
 1 file changed, 25 insertions(+), 20 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 30ed28e..58d6bbe 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -8549,10 +8549,9 @@ static int opt_subreg_zext_lo32_rnd_hi32(struct bpf_verifier_env *env,
                                          const union bpf_attr *attr)
 {
         struct bpf_insn *patch, zext_patch[2], rnd_hi32_patch[4];
-        struct bpf_insn_aux_data *aux = env->insn_aux_data;
-        int i, patch_len, delta = 0, len = env->prog->len;
-        struct bpf_insn *insns = env->prog->insnsi;
-        struct bpf_prog *new_prog;
+        struct bpf_list_insn *list, *elem;
+        struct bpf_insn_aux_data *aux;
+        int patch_len, ret = 0;
         bool rnd_hi32;
 
         rnd_hi32 = attr->prog_flags & BPF_F_TEST_RND_HI32;
@@ -8560,12 +8559,16 @@ static int opt_subreg_zext_lo32_rnd_hi32(struct bpf_verifier_env *env,
         rnd_hi32_patch[1] = BPF_ALU64_IMM(BPF_MOV, BPF_REG_AX, 0);
         rnd_hi32_patch[2] = BPF_ALU64_IMM(BPF_LSH, BPF_REG_AX, 32);
         rnd_hi32_patch[3] = BPF_ALU64_REG(BPF_OR, 0, BPF_REG_AX);
-        for (i = 0; i < len; i++) {
-                int adj_idx = i + delta;
-                struct bpf_insn insn;
 
-                insn = insns[adj_idx];
-                if (!aux[adj_idx].zext_dst) {
+        list = bpf_create_list_insn(env->prog);
+        if (IS_ERR(list))
+                return PTR_ERR(list);
+
+        for (elem = list; elem; elem = elem->next) {
+                struct bpf_insn insn = elem->insn;
+
+                aux = &env->insn_aux_data[elem->orig_idx - 1];
+                if (!aux->zext_dst) {
                         u8 code, class;
                         u32 imm_rnd;
 
@@ -8584,13 +8587,13 @@ static int opt_subreg_zext_lo32_rnd_hi32(struct bpf_verifier_env *env,
                         if (is_reg64(env, &insn, insn.dst_reg, NULL, DST_OP)) {
                                 if (class == BPF_LD &&
                                     BPF_MODE(code) == BPF_IMM)
-                                        i++;
+                                        elem = elem->next;
                                 continue;
                         }
 
                         /* ctx load could be transformed into wider load. */
                         if (class == BPF_LDX &&
-                            aux[adj_idx].ptr_type == PTR_TO_CTX)
+                            aux->ptr_type == PTR_TO_CTX)
                                 continue;
 
                         imm_rnd = get_random_int();
@@ -8611,16 +8614,18 @@ static int opt_subreg_zext_lo32_rnd_hi32(struct bpf_verifier_env *env,
                         patch = zext_patch;
                         patch_len = 2;
 apply_patch_buffer:
-                new_prog = bpf_patch_insn_data(env, adj_idx, patch, patch_len);
-                if (!new_prog)
-                        return -ENOMEM;
-                env->prog = new_prog;
-                insns = new_prog->insnsi;
-                aux = env->insn_aux_data;
-                delta += patch_len - 1;
+                elem = bpf_patch_list_insn(elem, patch, patch_len);
+                if (IS_ERR(elem)) {
+                        ret = PTR_ERR(elem);
+                        goto free_list_ret;
+                }
         }
 
-        return 0;
+        env = verifier_linearize_list_insn(env, list);
+        if (IS_ERR(env))
+                ret = PTR_ERR(env);
+free_list_ret:
+        bpf_destroy_list_insn(list);
+        return ret;
 }
 
 /* convert load instructions that access fields of a context type into a
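For reference, the shape of the two patch buffers this pass works with,
sketched for a hypothetical 32-bit def with dst_reg R0 (the zext_patch
contents are not visible in this hunk, so that part is an assumption):

	/* Zero-extension patch:
	 *
	 *	patch[0] = the original 32-bit insn
	 *	patch[1] = roughly a mov32 of R0 onto itself, which
	 *		   zero-extends the upper half
	 *
	 * Under BPF_F_TEST_RND_HI32 the high half is poisoned instead,
	 * using the buffer built above (its mov imm is filled with a fresh
	 * get_random_int() value per patched insn):
	 *
	 *	patch[0] = the original 32-bit insn
	 *	patch[1] = BPF_ALU64_IMM(BPF_MOV, BPF_REG_AX, imm_rnd)
	 *	patch[2] = BPF_ALU64_IMM(BPF_LSH, BPF_REG_AX, 32)
	 *	patch[3] = BPF_ALU64_REG(BPF_OR, R0, BPF_REG_AX)
	 *
	 * Because bpf_patch_list_insn() returns the last node of the
	 * expansion, the walk never re-examines the insns it just inserted.
	 */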
From patchwork Thu Jul 4 21:26:50 2019
Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiong Wang X-Patchwork-Id: 1127673 X-Patchwork-Delegate: bpf@iogearbox.net
From: Jiong Wang To: alexei.starovoitov@gmail.com, daniel@iogearbox.net Cc: ecree@solarflare.com, naveen.n.rao@linux.vnet.ibm.com, andriin@fb.com, jakub.kicinski@netronome.com, bpf@vger.kernel.org, netdev@vger.kernel.org, oss-drivers@netronome.com, Jiong Wang Subject: [RFC bpf-next 7/8] bpf: migrate insn remove to list patching infra Date: Thu, 4 Jul 2019 22:26:50 +0100 Message-Id: <1562275611-31790-8-git-send-email-jiong.wang@netronome.com> In-Reply-To: <1562275611-31790-1-git-send-email-jiong.wang@netronome.com> References: <1562275611-31790-1-git-send-email-jiong.wang@netronome.com>
This patch migrates the dead code removal pass to the new list patching infrastructure.
Signed-off-by: Jiong Wang --- kernel/bpf/verifier.c | 59 +++++++++++++++++---------------------------------- 1 file changed, 19 insertions(+), 40 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 58d6bbe..abe11fd 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -8500,49 +8500,30 @@ verifier_linearize_list_insn(struct bpf_verifier_env *env, return ret_env; } -static int opt_remove_dead_code(struct bpf_verifier_env *env) +static int opt_remove_useless_code(struct bpf_verifier_env *env) { - struct bpf_insn_aux_data *aux_data = env->insn_aux_data; - int insn_cnt = env->prog->len; - int i, err; - - for (i = 0; i < insn_cnt; i++) { - int j; - - j = 0; - while (i + j < insn_cnt && !aux_data[i + j].seen) - j++; - if (!j) - continue; - - err = verifier_remove_insns(env, i, j); - if (err) - return err; - insn_cnt = env->prog->len; - } - - return 0; -} - -static int opt_remove_nops(struct bpf_verifier_env *env) -{ - const struct bpf_insn ja = BPF_JMP_IMM(BPF_JA, 0, 0, 0); - struct bpf_insn *insn = env->prog->insnsi; - int insn_cnt = env->prog->len; - int i, err; + struct bpf_insn_aux_data *auxs = env->insn_aux_data; + const struct bpf_insn nop = + BPF_JMP_IMM(BPF_JA, 0, 0, 0); + struct bpf_list_insn *list, *elem; + int ret = 0; - for (i = 0; i < insn_cnt; i++) { - if (memcmp(&insn[i], &ja, sizeof(ja))) + list = bpf_create_list_insn(env->prog); + if (IS_ERR(list)) + return PTR_ERR(list); + for (elem = list; elem; elem = elem->next) { + if (auxs[elem->orig_idx - 1].seen && + memcmp(&elem->insn, &nop, sizeof(nop))) continue; - err = verifier_remove_insns(env, i, 1); - if (err) - return err; - insn_cnt--; - i--; + elem->flag |= LIST_INSN_FLAG_REMOVED; } - return 0; + env = verifier_linearize_list_insn(env, list); + if (IS_ERR(env)) + ret = PTR_ERR(env); + bpf_destroy_list_insn(list); + return ret; } static int opt_subreg_zext_lo32_rnd_hi32(struct bpf_verifier_env *env, @@ -9488,9 +9469,7 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, if (ret == 0) opt_hard_wire_dead_code_branches(env); if (ret == 0) - ret = opt_remove_dead_code(env); - if (ret == 0) - ret = opt_remove_nops(env); + ret = opt_remove_useless_code(env); } else { if (ret == 0) sanitize_dead_code(env);
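Worth noting: what used to be two separate passes, opt_remove_dead_code() and opt_remove_nops(), collapses into one predicate evaluated per list element. Restated as a helper for clarity (purely illustrative, the patch open-codes the condition in its loop):

/*
 * Illustrative restatement of the loop condition above: an insn is
 * dropped when the verifier never saw it (dead code) or when it is a
 * "ja 0" no-op left behind by earlier rewrites.
 */
static bool list_insn_is_useless(const struct bpf_insn_aux_data *aux,
                                 const struct bpf_insn *insn)
{
        const struct bpf_insn nop = BPF_JMP_IMM(BPF_JA, 0, 0, 0);

        return !aux->seen || !memcmp(insn, &nop, sizeof(nop));
}

Because removal is now just setting LIST_INSN_FLAG_REMOVED, the subprog-start and line-info adjustments that verifier_remove_insns() had to redo after every deletion are deferred to the single linearization step.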
From patchwork Thu Jul 4 21:26:51 2019
Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiong Wang X-Patchwork-Id: 1127676 X-Patchwork-Delegate: bpf@iogearbox.net
From: Jiong Wang To: alexei.starovoitov@gmail.com, daniel@iogearbox.net Cc: ecree@solarflare.com, naveen.n.rao@linux.vnet.ibm.com, andriin@fb.com, jakub.kicinski@netronome.com, bpf@vger.kernel.org, netdev@vger.kernel.org, oss-drivers@netronome.com, Jiong Wang Subject: [RFC bpf-next 8/8] bpf: delete all those code around old insn patching infrastructure Date: Thu, 4 Jul 2019 22:26:51 +0100 Message-Id: <1562275611-31790-9-git-send-email-jiong.wang@netronome.com>
In-Reply-To: <1562275611-31790-1-git-send-email-jiong.wang@netronome.com> References: <1562275611-31790-1-git-send-email-jiong.wang@netronome.com>
This patch deletes all code around the old insn patching infrastructure.
Signed-off-by: Jiong Wang --- include/linux/bpf_verifier.h | 1 - include/linux/filter.h | 4 - kernel/bpf/core.c | 169 --------------------------------- kernel/bpf/verifier.c | 221 +------------------------------------------ 4 files changed, 1 insertion(+), 394 deletions(-)
diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h index 5fe99f3..79c1733 100644 --- a/include/linux/bpf_verifier.h +++ b/include/linux/bpf_verifier.h @@ -305,7 +305,6 @@ struct bpf_insn_aux_data { bool zext_dst; /* this insn zero extends dst reg */ u8 alu_state; /* used in combination with alu_limit */ bool prune_point; - unsigned int orig_idx; /* original instruction index */ }; #define MAX_USED_MAPS 64 /* max number of maps accessed by one eBPF program */ diff --git a/include/linux/filter.h b/include/linux/filter.h index 1fea68c..fcfe0b0 100644 --- a/include/linux/filter.h +++ b/include/linux/filter.h @@ -838,10 +838,6 @@ static inline bool bpf_dump_raw_ok(void) return kallsyms_show_value() == 1; } -struct bpf_prog *bpf_patch_insn_single(struct bpf_prog *prog, u32 off, - const struct bpf_insn *patch, u32 len); -int bpf_remove_insns(struct bpf_prog *prog, u32 off, u32 cnt); - int bpf_jit_adj_imm_off(struct bpf_insn *insn, int old_idx, int new_idx, int idx_map[]); diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c index c3a5f84..716220b 100644 --- a/kernel/bpf/core.c +++ b/kernel/bpf/core.c @@ -333,175 +333,6 @@ int bpf_prog_calc_tag(struct bpf_prog *fp) return 0; } -static int bpf_adj_delta_to_imm(struct bpf_insn *insn, u32 pos, s32 end_old, - s32 end_new, s32 curr, const bool probe_pass) -{ - const s64 imm_min = S32_MIN, imm_max = S32_MAX; - s32 delta = end_new - end_old; - s64 imm = insn->imm; - - if (curr < pos && curr + imm + 1 >= end_old) - imm += delta; - else if (curr >= end_new && curr + imm + 1 < end_new) - imm -= delta; - if (imm < imm_min || imm > imm_max) - return -ERANGE; - if (!probe_pass) - insn->imm = imm; - return 0; -} - -static int bpf_adj_delta_to_off(struct bpf_insn *insn, u32 pos, s32 end_old, - s32 end_new, s32 curr, const bool probe_pass) -{ - const s32 off_min = S16_MIN, off_max = S16_MAX; - s32 delta = end_new - end_old; - s32 off = insn->off; - - if (curr < pos && curr + off + 1 >= end_old) - off += delta; - else if (curr >= end_new && curr + off + 1 < end_new) - off -= delta; - if (off < off_min || off > off_max) - return -ERANGE; - if (!probe_pass) - insn->off = off; - return 0; -} - -static int bpf_adj_branches(struct bpf_prog *prog, u32 pos, s32 end_old, - s32 end_new, const bool probe_pass) -{ - u32 i, insn_cnt = prog->len + (probe_pass ? end_new - end_old : 0); - struct bpf_insn *insn = prog->insnsi; - int ret = 0; - - for (i = 0; i < insn_cnt; i++, insn++) { - u8 code; - - /* In the probing pass we still operate on the original, - * unpatched image in order to check overflows before we - * do any other adjustments. Therefore skip the patchlet. - */ - if (probe_pass && i == pos) { - i = end_new; - insn = prog->insnsi + end_old; - } - code = insn->code; - if ((BPF_CLASS(code) != BPF_JMP && - BPF_CLASS(code) != BPF_JMP32) || - BPF_OP(code) == BPF_EXIT) - continue; - /* Adjust offset of jmps if we cross patch boundaries.
*/ - if (BPF_OP(code) == BPF_CALL) { - if (insn->src_reg != BPF_PSEUDO_CALL) - continue; - ret = bpf_adj_delta_to_imm(insn, pos, end_old, - end_new, i, probe_pass); - } else { - ret = bpf_adj_delta_to_off(insn, pos, end_old, - end_new, i, probe_pass); - } - if (ret) - break; - } - - return ret; -} - -static void bpf_adj_linfo(struct bpf_prog *prog, u32 off, u32 delta) -{ - struct bpf_line_info *linfo; - u32 i, nr_linfo; - - nr_linfo = prog->aux->nr_linfo; - if (!nr_linfo || !delta) - return; - - linfo = prog->aux->linfo; - - for (i = 0; i < nr_linfo; i++) - if (off < linfo[i].insn_off) - break; - - /* Push all off < linfo[i].insn_off by delta */ - for (; i < nr_linfo; i++) - linfo[i].insn_off += delta; -} - -struct bpf_prog *bpf_patch_insn_single(struct bpf_prog *prog, u32 off, - const struct bpf_insn *patch, u32 len) -{ - u32 insn_adj_cnt, insn_rest, insn_delta = len - 1; - const u32 cnt_max = S16_MAX; - struct bpf_prog *prog_adj; - int err; - - /* Since our patchlet doesn't expand the image, we're done. */ - if (insn_delta == 0) { - memcpy(prog->insnsi + off, patch, sizeof(*patch)); - return prog; - } - - insn_adj_cnt = prog->len + insn_delta; - - /* Reject anything that would potentially let the insn->off - * target overflow when we have excessive program expansions. - * We need to probe here before we do any reallocation where - * we afterwards may not fail anymore. - */ - if (insn_adj_cnt > cnt_max && - (err = bpf_adj_branches(prog, off, off + 1, off + len, true))) - return ERR_PTR(err); - - /* Several new instructions need to be inserted. Make room - * for them. Likely, there's no need for a new allocation as - * last page could have large enough tailroom. - */ - prog_adj = bpf_prog_realloc(prog, bpf_prog_size(insn_adj_cnt), - GFP_USER); - if (!prog_adj) - return ERR_PTR(-ENOMEM); - - prog_adj->len = insn_adj_cnt; - - /* Patching happens in 3 steps: - * - * 1) Move over tail of insnsi from next instruction onwards, - * so we can patch the single target insn with one or more - * new ones (patching is always from 1 to n insns, n > 0). - * 2) Inject new instructions at the target location. - * 3) Adjust branch offsets if necessary. - */ - insn_rest = insn_adj_cnt - off - len; - - memmove(prog_adj->insnsi + off + len, prog_adj->insnsi + off + 1, - sizeof(*patch) * insn_rest); - memcpy(prog_adj->insnsi + off, patch, sizeof(*patch) * len); - - /* We are guaranteed to not fail at this point, otherwise - * the ship has sailed to reverse to the original state. An - * overflow cannot happen at this point. - */ - BUG_ON(bpf_adj_branches(prog_adj, off, off + 1, off + len, false)); - - bpf_adj_linfo(prog_adj, off, insn_delta); - - return prog_adj; -} - -int bpf_remove_insns(struct bpf_prog *prog, u32 off, u32 cnt) -{ - /* Branch offsets can't overflow when program is shrinking, no need - * to call bpf_adj_branches(..., true) here - */ - memmove(prog->insnsi + off, prog->insnsi + off + cnt, - sizeof(struct bpf_insn) * (prog->len - off - cnt)); - prog->len -= cnt; - - return WARN_ON_ONCE(bpf_adj_branches(prog, off, off + cnt, off, false)); -} - int bpf_jit_adj_imm_off(struct bpf_insn *insn, int old_idx, int new_idx, s32 idx_map[]) { diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index abe11fd..9e5618f 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -8067,223 +8067,6 @@ static void convert_pseudo_ld_imm64(struct bpf_verifier_env *env) insn->src_reg = 0; } -/* single env->prog->insni[off] instruction was replaced with the range - * insni[off, off + cnt). 
Adjust corresponding insn_aux_data by copying - * [0, off) and [off, end) to new locations, so the patched range stays zero - */ -static int adjust_insn_aux_data(struct bpf_verifier_env *env, - struct bpf_prog *new_prog, u32 off, u32 cnt) -{ - struct bpf_insn_aux_data *new_data, *old_data = env->insn_aux_data; - struct bpf_insn *insn = new_prog->insnsi; - u32 prog_len; - int i; - - /* aux info at OFF always needs adjustment, no matter fast path - * (cnt == 1) is taken or not. There is no guarantee INSN at OFF is the - * original insn at old prog. - */ - old_data[off].zext_dst = insn_has_def32(env, insn + off + cnt - 1); - - if (cnt == 1) - return 0; - prog_len = new_prog->len; - new_data = vzalloc(array_size(prog_len, - sizeof(struct bpf_insn_aux_data))); - if (!new_data) - return -ENOMEM; - memcpy(new_data, old_data, sizeof(struct bpf_insn_aux_data) * off); - memcpy(new_data + off + cnt - 1, old_data + off, - sizeof(struct bpf_insn_aux_data) * (prog_len - off - cnt + 1)); - for (i = off; i < off + cnt - 1; i++) { - new_data[i].seen = true; - new_data[i].zext_dst = insn_has_def32(env, insn + i); - } - env->insn_aux_data = new_data; - vfree(old_data); - return 0; -} - -static void adjust_subprog_starts(struct bpf_verifier_env *env, u32 off, u32 len) -{ - int i; - - if (len == 1) - return; - /* NOTE: fake 'exit' subprog should be updated as well. */ - for (i = 0; i <= env->subprog_cnt; i++) { - if (env->subprog_info[i].start <= off) - continue; - env->subprog_info[i].start += len - 1; - } -} - -static struct bpf_prog *bpf_patch_insn_data(struct bpf_verifier_env *env, u32 off, - const struct bpf_insn *patch, u32 len) -{ - struct bpf_prog *new_prog; - - new_prog = bpf_patch_insn_single(env->prog, off, patch, len); - if (IS_ERR(new_prog)) { - if (PTR_ERR(new_prog) == -ERANGE) - verbose(env, - "insn %d cannot be patched due to 16-bit range\n", - env->insn_aux_data[off].orig_idx); - return NULL; - } - if (adjust_insn_aux_data(env, new_prog, off, len)) - return NULL; - adjust_subprog_starts(env, off, len); - return new_prog; -} - -static int adjust_subprog_starts_after_remove(struct bpf_verifier_env *env, - u32 off, u32 cnt) -{ - int i, j; - - /* find first prog starting at or after off (first to remove) */ - for (i = 0; i < env->subprog_cnt; i++) - if (env->subprog_info[i].start >= off) - break; - /* find first prog starting at or after off + cnt (first to stay) */ - for (j = i; j < env->subprog_cnt; j++) - if (env->subprog_info[j].start >= off + cnt) - break; - /* if j doesn't start exactly at off + cnt, we are just removing - * the front of previous prog - */ - if (env->subprog_info[j].start != off + cnt) - j--; - - if (j > i) { - struct bpf_prog_aux *aux = env->prog->aux; - int move; - - /* move fake 'exit' subprog as well */ - move = env->subprog_cnt + 1 - j; - - memmove(env->subprog_info + i, - env->subprog_info + j, - sizeof(*env->subprog_info) * move); - env->subprog_cnt -= j - i; - - /* remove func_info */ - if (aux->func_info) { - move = aux->func_info_cnt - j; - - memmove(aux->func_info + i, - aux->func_info + j, - sizeof(*aux->func_info) * move); - aux->func_info_cnt -= j - i; - /* func_info->insn_off is set after all code rewrites, - * in adjust_btf_func() - no need to adjust - */ - } - } else { - /* convert i from "first prog to remove" to "first to adjust" */ - if (env->subprog_info[i].start == off) - i++; - } - - /* update fake 'exit' subprog as well */ - for (; i <= env->subprog_cnt; i++) - env->subprog_info[i].start -= cnt; - - return 0; -} - -static int 
bpf_adj_linfo_after_remove(struct bpf_verifier_env *env, u32 off, - u32 cnt) -{ - struct bpf_prog *prog = env->prog; - u32 i, l_off, l_cnt, nr_linfo; - struct bpf_line_info *linfo; - - nr_linfo = prog->aux->nr_linfo; - if (!nr_linfo) - return 0; - - linfo = prog->aux->linfo; - - /* find first line info to remove, count lines to be removed */ - for (i = 0; i < nr_linfo; i++) - if (linfo[i].insn_off >= off) - break; - - l_off = i; - l_cnt = 0; - for (; i < nr_linfo; i++) - if (linfo[i].insn_off < off + cnt) - l_cnt++; - else - break; - - /* First live insn doesn't match first live linfo, it needs to "inherit" - * last removed linfo. prog is already modified, so prog->len == off - * means no live instructions after (tail of the program was removed). - */ - if (prog->len != off && l_cnt && - (i == nr_linfo || linfo[i].insn_off != off + cnt)) { - l_cnt--; - linfo[--i].insn_off = off + cnt; - } - - /* remove the line info which refer to the removed instructions */ - if (l_cnt) { - memmove(linfo + l_off, linfo + i, - sizeof(*linfo) * (nr_linfo - i)); - - prog->aux->nr_linfo -= l_cnt; - nr_linfo = prog->aux->nr_linfo; - } - - /* pull all linfo[i].insn_off >= off + cnt in by cnt */ - for (i = l_off; i < nr_linfo; i++) - linfo[i].insn_off -= cnt; - - /* fix up all subprogs (incl. 'exit') which start >= off */ - for (i = 0; i <= env->subprog_cnt; i++) - if (env->subprog_info[i].linfo_idx > l_off) { - /* program may have started in the removed region but - * may not be fully removed - */ - if (env->subprog_info[i].linfo_idx >= l_off + l_cnt) - env->subprog_info[i].linfo_idx -= l_cnt; - else - env->subprog_info[i].linfo_idx = l_off; - } - - return 0; -} - -static int verifier_remove_insns(struct bpf_verifier_env *env, u32 off, u32 cnt) -{ - struct bpf_insn_aux_data *aux_data = env->insn_aux_data; - unsigned int orig_prog_len = env->prog->len; - int err; - - if (bpf_prog_is_dev_bound(env->prog->aux)) - bpf_prog_offload_remove_insns(env, off, cnt); - - err = bpf_remove_insns(env->prog, off, cnt); - if (err) - return err; - - err = adjust_subprog_starts_after_remove(env, off, cnt); - if (err) - return err; - - err = bpf_adj_linfo_after_remove(env, off, cnt); - if (err) - return err; - - memmove(aux_data + off, aux_data + off + cnt, - sizeof(*aux_data) * (orig_prog_len - off - cnt)); - - return 0; -} - /* The verifier does more data flow analysis than llvm and will not * explore branches that are dead at run time. Malicious programs can * have dead code too. Therefore replace all dead at-run-time code @@ -9365,7 +9148,7 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, u64 start_time = ktime_get_ns(); struct bpf_verifier_env *env; struct bpf_verifier_log *log; - int i, len, ret = -EINVAL; + int len, ret = -EINVAL; bool is_priv; /* no program is valid */ @@ -9386,8 +9169,6 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, ret = -ENOMEM; if (!env->insn_aux_data) goto err_free_env; - for (i = 0; i < len; i++) - env->insn_aux_data[i].orig_idx = i; env->prog = *prog; env->ops = bpf_verifier_ops[env->prog->type]; is_priv = capable(CAP_SYS_ADMIN);
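For contrast with the list approach, the arithmetic the retired bpf_adj_delta_to_off() performed at every patch site can be modelled in user space as below. This is a toy: end_old/end_new are fixed to the single-insn patch case, and the real code additionally ran a probe pass and enforced the S16 offset range.

#include <stdio.h>

/*
 * Toy model of the retired delta-based jump relocation: one insn at
 * POS is replaced by LEN insns, and a jump at CURR with offset OFF
 * (target = curr + off + 1) is adjusted if it crosses the patch site.
 */
static int adj_off(int curr, int off, int pos, int len)
{
        int end_old = pos + 1, end_new = pos + len;
        int delta = end_new - end_old;

        if (curr < pos && curr + off + 1 >= end_old)
                off += delta;   /* forward jump across the patch grows */
        else if (curr >= end_new && curr + off + 1 < end_new)
                off -= delta;   /* backward jump across the patch gets
                                 * delta more negative */
        return off;
}

int main(void)
{
        /* insn 0 jumps +2 to insn 3; insn 1 is expanded into 3 insns,
         * so the target moves to insn 5 and the offset must become +4.
         */
        printf("%d\n", adj_off(0, 2, 1, 3));
        return 0;
}

Every patch site paid this cost, plus a possible program reallocation. The list infrastructure instead relocates jumps once, at linearization time, via an old-to-new index map (the idx_map argument that the retained bpf_jit_adj_imm_off() consumes).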