From patchwork Mon Apr 15 17:26:11 2019
From: Jiong Wang
To: alexei.starovoitov@gmail.com, daniel@iogearbox.net
Cc: bpf@vger.kernel.org, netdev@vger.kernel.org, oss-drivers@netronome.com, Jiong Wang
Subject: [PATCH v4 bpf-next 01/15] bpf: split read liveness into REG_LIVE_READ64 and REG_LIVE_READ32
Date: Mon, 15 Apr 2019 18:26:11 +0100
Message-Id: <1555349185-12508-2-git-send-email-jiong.wang@netronome.com>
In-Reply-To: <1555349185-12508-1-git-send-email-jiong.wang@netronome.com>
References: <1555349185-12508-1-git-send-email-jiong.wang@netronome.com>

Register liveness infrastructure doesn't track register read width at the
moment, while the width information will be needed for the later 32-bit
safety analysis
pass. This patch takes the first step: it splits read liveness into
REG_LIVE_READ64 and REG_LIVE_READ32. The liveness propagation code is
updated accordingly and taught to propagate REG_LIVE_READ64 and
REG_LIVE_READ32 in the same propagation iteration. For example,
"mark_reg_read" now propagates "flags", which can carry multiple read
bits, instead of the single REG_LIVE_READ64. A write still screens off
reads of all widths.

Signed-off-by: Jiong Wang
Reviewed-by: Jakub Kicinski
---
 include/linux/bpf_verifier.h |   8 +--
 kernel/bpf/verifier.c        | 119 +++++++++++++++++++++++++++++++++++++++----
 2 files changed, 115 insertions(+), 12 deletions(-)

diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index b3ab61f..fba0ebb 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -36,9 +36,11 @@
  */
 enum bpf_reg_liveness {
 	REG_LIVE_NONE = 0, /* reg hasn't been read or written this branch */
-	REG_LIVE_READ, /* reg was read, so we're sensitive to initial value */
-	REG_LIVE_WRITTEN, /* reg was written first, screening off later reads */
-	REG_LIVE_DONE = 4, /* liveness won't be updating this register anymore */
+	REG_LIVE_READ32 = 0x1, /* reg was read, so we're sensitive to initial value */
+	REG_LIVE_READ64 = 0x2, /* likewise, but full 64-bit content matters */
+	REG_LIVE_READ = REG_LIVE_READ32 | REG_LIVE_READ64,
+	REG_LIVE_WRITTEN = 0x4, /* reg was written first, screening off later reads */
+	REG_LIVE_DONE = 0x8, /* liveness won't be updating this register anymore */
 };
 
 struct bpf_reg_state {
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index c722015..5784b279 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1135,7 +1135,7 @@ static int check_subprogs(struct bpf_verifier_env *env)
  */
 static int mark_reg_read(struct bpf_verifier_env *env,
 			 const struct bpf_reg_state *state,
-			 struct bpf_reg_state *parent)
+			 struct bpf_reg_state *parent, u8 flags)
 {
 	bool writes = parent == state->parent; /* Observe write marks */
 	int cnt = 0;
@@ -1150,17 +1150,23 @@ static int mark_reg_read(struct bpf_verifier_env *env,
 				parent->var_off.value, parent->off);
 			return -EFAULT;
 		}
-		if (parent->live & REG_LIVE_READ)
+		/* The first condition is much more likely to be true than the
+		 * second, make it checked first.
+		 */
+		if ((parent->live & REG_LIVE_READ) == flags ||
+		    parent->live & REG_LIVE_READ64)
 			/* The parentage chain never changes and
 			 * this parent was already marked as LIVE_READ.
 			 * There is no need to keep walking the chain again and
 			 * keep re-marking all parents as LIVE_READ.
 			 * This case happens when the same register is read
 			 * multiple times without writes into it in-between.
+			 * Also, if parent has REG_LIVE_READ64 set, then no need
+			 * to set the weak REG_LIVE_READ32.
 			 */
 			break;
 		/* ... then we depend on parent's value */
-		parent->live |= REG_LIVE_READ;
+		parent->live |= flags;
 		state = parent;
 		parent = state->parent;
 		writes = true;
@@ -1172,12 +1178,95 @@ static int mark_reg_read(struct bpf_verifier_env *env,
 	return 0;
 }
 
+/* This function is supposed to be used by the following 32-bit optimization
+ * code only. It returns TRUE if the source or destination register operates
+ * on 64-bit, otherwise return FALSE.
+ */
+static bool is_reg64(struct bpf_insn *insn, u32 regno,
+		     struct bpf_reg_state *reg, enum reg_arg_type t)
+{
+	u8 code, class, op;
+
+	code = insn->code;
+	class = BPF_CLASS(code);
+	op = BPF_OP(code);
+	if (class == BPF_JMP) {
+		/* BPF_EXIT will reach here because of return value readability
+		 * test for "main" which has s32 return value.
+		 */
+		if (op == BPF_EXIT)
+			return false;
+		if (op == BPF_CALL) {
+			/* BPF to BPF call will reach here because of marking
+			 * caller saved clobber with DST_OP_NO_MARK for which we
+			 * don't care the register def because they are anyway
+			 * marked as NOT_INIT already.
+			 */
+			if (insn->src_reg == BPF_PSEUDO_CALL)
+				return false;
+			/* Helper call will reach here because of arg type
+			 * check. Conservatively marking all args as 64-bit.
+			 */
+			return true;
+		}
+	}
+
+	if (class == BPF_ALU64 || class == BPF_JMP ||
+	    /* BPF_END always use BPF_ALU class. */
+	    (class == BPF_ALU && op == BPF_END && insn->imm == 64))
+		return true;
+
+	if (class == BPF_ALU || class == BPF_JMP32)
+		return false;
+
+	if (class == BPF_LDX) {
+		if (t != SRC_OP)
+			return BPF_SIZE(code) == BPF_DW;
+		/* LDX source must be ptr. */
+		return true;
+	}
+
+	if (class == BPF_STX) {
+		if (reg->type != SCALAR_VALUE)
+			return true;
+		return BPF_SIZE(code) == BPF_DW;
+	}
+
+	if (class == BPF_LD) {
+		u8 mode = BPF_MODE(code);
+
+		/* LD_IMM64 */
+		if (mode == BPF_IMM)
+			return true;
+
+		/* Both LD_IND and LD_ABS return 32-bit data. */
+		if (t != SRC_OP)
+			return false;
+
+		/* Implicit ctx ptr. */
+		if (regno == BPF_REG_6)
+			return true;
+
+		/* Explicit source could be any width. */
+		return true;
+	}
+
+	if (class == BPF_ST)
+		/* The only source register for BPF_ST is a ptr. */
+		return true;
+
+	/* Conservatively return true at default. */
+	return true;
+}
+
 static int check_reg_arg(struct bpf_verifier_env *env, u32 regno,
 			 enum reg_arg_type t)
 {
 	struct bpf_verifier_state *vstate = env->cur_state;
 	struct bpf_func_state *state = vstate->frame[vstate->curframe];
+	struct bpf_insn *insn = env->prog->insnsi + env->insn_idx;
 	struct bpf_reg_state *reg, *regs = state->regs;
+	bool rw64;
 
 	if (regno >= MAX_BPF_REG) {
 		verbose(env, "R%d is invalid\n", regno);
@@ -1185,6 +1274,7 @@ static int check_reg_arg(struct bpf_verifier_env *env, u32 regno,
 	}
 
 	reg = &regs[regno];
+	rw64 = is_reg64(insn, regno, reg, t);
 	if (t == SRC_OP) {
 		/* check whether register used as source operand can be read */
 		if (reg->type == NOT_INIT) {
@@ -1195,7 +1285,8 @@ static int check_reg_arg(struct bpf_verifier_env *env, u32 regno,
 		if (regno == BPF_REG_FP)
 			return 0;
 
-		return mark_reg_read(env, reg, reg->parent);
+		return mark_reg_read(env, reg, reg->parent,
+				     rw64 ? REG_LIVE_READ64 : REG_LIVE_READ32);
 	} else {
 		/* check whether register used as dest operand can be written to */
 		if (regno == BPF_REG_FP) {
@@ -1382,7 +1473,8 @@ static int check_stack_read(struct bpf_verifier_env *env,
 			state->regs[value_regno].live |= REG_LIVE_WRITTEN;
 		}
 		mark_reg_read(env, &reg_state->stack[spi].spilled_ptr,
-			      reg_state->stack[spi].spilled_ptr.parent);
+			      reg_state->stack[spi].spilled_ptr.parent,
+			      REG_LIVE_READ64);
 		return 0;
 	} else {
 		int zeros = 0;
@@ -1399,7 +1491,9 @@ static int check_stack_read(struct bpf_verifier_env *env,
 			return -EACCES;
 		}
 		mark_reg_read(env, &reg_state->stack[spi].spilled_ptr,
-			      reg_state->stack[spi].spilled_ptr.parent);
+			      reg_state->stack[spi].spilled_ptr.parent,
+			      size == BPF_REG_SIZE
+			      ? REG_LIVE_READ64 : REG_LIVE_READ32);
 		if (value_regno >= 0) {
 			if (zeros == size) {
 				/* any size read into register is zero extended,
@@ -2337,7 +2431,9 @@ static int check_stack_boundary(struct bpf_verifier_env *env, int regno,
 		 * the whole slot to be marked as 'read'
 		 */
 		mark_reg_read(env, &state->stack[spi].spilled_ptr,
-			      state->stack[spi].spilled_ptr.parent);
+			      state->stack[spi].spilled_ptr.parent,
+			      access_size == BPF_REG_SIZE
+			      ? REG_LIVE_READ64 : REG_LIVE_READ32);
 	}
 	return update_stack_depth(env, state, min_off);
 }
@@ -6227,12 +6323,17 @@ static int propagate_liveness_reg(struct bpf_verifier_env *env,
 				  struct bpf_reg_state *reg,
 				  struct bpf_reg_state *parent_reg)
 {
+	u8 parent_bits = parent_reg->live & REG_LIVE_READ;
+	u8 bits = reg->live & REG_LIVE_READ;
+	u8 bits_diff = parent_bits ^ bits;
+	u8 bits_prop = bits_diff & bits;
 	int err;
 
-	if (parent_reg->live & REG_LIVE_READ || !(reg->live & REG_LIVE_READ))
+	/* No diff bit comes from "reg". */
+	if (!bits_prop)
 		return 0;
 
-	err = mark_reg_read(env, reg, parent_reg);
+	err = mark_reg_read(env, reg, parent_reg, bits_prop);
 	if (err)
 		return err;

From patchwork Mon Apr 15 17:26:12 2019
From: Jiong Wang
To: alexei.starovoitov@gmail.com, daniel@iogearbox.net
Cc: bpf@vger.kernel.org, netdev@vger.kernel.org, oss-drivers@netronome.com, Jiong Wang
Subject: [PATCH v4 bpf-next 02/15] bpf: mark lo32 writes that should be zero extended into hi32
Date: Mon, 15 Apr 2019 18:26:12 +0100
Message-Id: <1555349185-12508-3-git-send-email-jiong.wang@netronome.com>
In-Reply-To: <1555349185-12508-1-git-send-email-jiong.wang@netronome.com>
References: <1555349185-12508-1-git-send-email-jiong.wang@netronome.com>

The eBPF ISA specification
requires that the high 32 bits be cleared whenever the low 32-bit
sub-register is written, for example by the destination register of an
ALU32 instruction. JIT back-ends must guarantee this semantic when doing
code-gen. The x86-64 and arm64 ISAs have the same semantic, so those JIT
back-ends need no extra work. However, 32-bit arches (arm, nfp etc.) and
some other 64-bit arches (powerpc, sparc etc.) need an explicit zero
extension sequence to meet the semantic.

This matters because, for code like the following:

  u64_value = (u64) u32_value
  ... other uses of u64_value

the compiler can exploit the semantic described above and omit the zero
extension when widening u32_value to u64_value; the hardware, runtime, or
BPF JIT back-end is then responsible for guaranteeing it. Some benchmarks
show that ~40% of total insns are sub-register writes, meaning ~40% extra
code-gen (more on arches which require two shifts for zero extension)
because the JIT back-end must emit the extension for every such
instruction. Yet the extension is unnecessary whenever u32_value is never
cast into a u64, which is quite common in real-life programs. If we can
identify the places where such a cast happens, and emit zero extensions
only for them, we save a lot of BPF code-gen.

Algo:
 - Record the index of each insn that defs (writes) a sub-register. These
   indices need to stay with the reg state so path pruning and bpf-to-bpf
   function calls can be handled properly, and they are kept up to date
   during the insn walk.
 - A full register read on an active sub-register def marks the def insn
   as needing zero extension on its dst register.
 - A new sub-register write overrides the old one. A new full register
   write frees the register of zero extension on the dst register.
 - When propagating read64 during path pruning, also mark the def insns
   of any hanging active sub-register defs.
Reviewed-by: Jakub Kicinski
Signed-off-by: Jiong Wang
---
 include/linux/bpf_verifier.h |  6 ++++++
 kernel/bpf/verifier.c        | 45 ++++++++++++++++++++++++++++++++++++++++----
 2 files changed, 47 insertions(+), 4 deletions(-)

diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index fba0ebb..c1923a5 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -133,6 +133,11 @@ struct bpf_reg_state {
 	 * pointing to bpf_func_state.
 	 */
 	u32 frameno;
+	/* Tracks subreg definition. The stored value is the insn_idx of the
+	 * writing insn. This is safe because subreg_def is used before any insn
+	 * patching which only happens after main verification finished.
+	 */
+	s32 subreg_def;
 	enum bpf_reg_liveness live;
 };
 
@@ -234,6 +239,7 @@ struct bpf_insn_aux_data {
 	int ctx_field_size; /* the ctx field size for load insn, maybe 0 */
 	int sanitize_stack_off; /* stack slot to be cleared */
 	bool seen; /* this insn was processed by the verifier */
+	bool zext_dst; /* this insn zero extend dst reg */
 	u8 alu_state; /* used in combination with alu_limit */
 	unsigned int orig_idx; /* original instruction index */
 };
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 5784b279..d5cc167 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -980,6 +980,7 @@ static void mark_reg_not_init(struct bpf_verifier_env *env,
 	__mark_reg_not_init(regs + regno);
 }
 
+#define DEF_NOT_SUBREG	(-1)
 static void init_reg_state(struct bpf_verifier_env *env,
 			   struct bpf_func_state *state)
 {
@@ -990,6 +991,7 @@ static void init_reg_state(struct bpf_verifier_env *env,
 		mark_reg_not_init(env, regs, i);
 		regs[i].live = REG_LIVE_NONE;
 		regs[i].parent = NULL;
+		regs[i].subreg_def = DEF_NOT_SUBREG;
 	}
 
 	/* frame pointer */
@@ -1259,6 +1261,19 @@ static bool is_reg64(struct bpf_insn *insn, u32 regno,
 	return true;
 }
 
+static void mark_insn_zext(struct bpf_verifier_env *env,
+			   struct bpf_reg_state *reg)
+{
+	s32 def_idx = reg->subreg_def;
+
+	if (def_idx == DEF_NOT_SUBREG)
+		return;
+
+	env->insn_aux_data[def_idx].zext_dst = true;
+	/* The dst will be zero extended, so won't be sub-register anymore. */
+	reg->subreg_def = DEF_NOT_SUBREG;
+}
+
 static int check_reg_arg(struct bpf_verifier_env *env, u32 regno,
 			 enum reg_arg_type t)
 {
@@ -1285,6 +1300,9 @@ static int check_reg_arg(struct bpf_verifier_env *env, u32 regno,
 		if (regno == BPF_REG_FP)
 			return 0;
 
+		if (rw64)
+			mark_insn_zext(env, reg);
+
 		return mark_reg_read(env, reg, reg->parent,
 				     rw64 ? REG_LIVE_READ64 : REG_LIVE_READ32);
 	} else {
@@ -1294,6 +1312,7 @@ static int check_reg_arg(struct bpf_verifier_env *env, u32 regno,
 			return -EACCES;
 		}
 		reg->live |= REG_LIVE_WRITTEN;
+		reg->subreg_def = rw64 ? DEF_NOT_SUBREG : env->insn_idx;
 		if (t == DST_OP)
 			mark_reg_unknown(env, regs, regno);
 	}
@@ -2176,6 +2195,12 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
 					    value_regno);
 				if (reg_type_may_be_null(reg_type))
 					regs[value_regno].id = ++env->id_gen;
+				/* A load of ctx field could have different
+				 * actual load size with the one encoded in the
+				 * insn. When the dst is PTR, it is for sure not
+				 * a sub-register.
+				 */
+				regs[value_regno].subreg_def = DEF_NOT_SUBREG;
 			}
 			regs[value_regno].type = reg_type;
 		}
@@ -3376,6 +3401,9 @@ static int check_helper_call(struct bpf_verifier_env *env, int func_id, int insn
 		check_reg_arg(env, caller_saved[i], DST_OP_NO_MARK);
 	}
 
+	/* helper call must return full 64-bit R0. */
+	regs[BPF_REG_0].subreg_def = DEF_NOT_SUBREG;
+
 	/* update return register (already marked as written above) */
 	if (fn->ret_type == RET_INTEGER) {
 		/* sets type to SCALAR_VALUE */
@@ -4307,6 +4335,7 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
 				 */
 				*dst_reg = *src_reg;
 				dst_reg->live |= REG_LIVE_WRITTEN;
+				dst_reg->subreg_def = DEF_NOT_SUBREG;
 			} else {
 				/* R1 = (u32) R2 */
 				if (is_pointer_value(env, insn->src_reg)) {
@@ -4317,6 +4346,7 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
 				} else if (src_reg->type == SCALAR_VALUE) {
 					*dst_reg = *src_reg;
 					dst_reg->live |= REG_LIVE_WRITTEN;
+					dst_reg->subreg_def = env->insn_idx;
 				} else {
 					mark_reg_unknown(env, regs,
 							 insn->dst_reg);
@@ -5380,6 +5410,8 @@ static int check_ld_abs(struct bpf_verifier_env *env, struct bpf_insn *insn)
 	 * Already marked as written above.
 	 */
 	mark_reg_unknown(env, regs, BPF_REG_0);
+	/* ld_abs load up to 32-bit skb data. */
+	regs[BPF_REG_0].subreg_def = env->insn_idx;
 	return 0;
 }
 
@@ -6319,6 +6351,9 @@ static bool states_equal(struct bpf_verifier_env *env,
 	return true;
 }
 
+/* Return 0 if no propagation happened. Return negative error code if error
+ * happened. Otherwise, return the propagated bits.
+ */
 static int propagate_liveness_reg(struct bpf_verifier_env *env,
 				  struct bpf_reg_state *reg,
 				  struct bpf_reg_state *parent_reg)
@@ -6337,7 +6372,7 @@ static int propagate_liveness_reg(struct bpf_verifier_env *env,
 	if (err)
 		return err;
 
-	return 0;
+	return bits_prop;
 }
 
 /* A write screens off any subsequent reads; but write marks come from the
@@ -6371,8 +6406,10 @@ static int propagate_liveness(struct bpf_verifier_env *env,
 	for (i = frame < vstate->curframe ? BPF_REG_6 : 0; i < BPF_REG_FP; i++) {
 		err = propagate_liveness_reg(env, &state_reg[i],
 					     &parent_reg[i]);
-		if (err)
+		if (err < 0)
 			return err;
+		if (err & REG_LIVE_READ64)
+			mark_insn_zext(env, &parent_reg[i]);
 	}
 
 	/* Propagate stack slots. */
@@ -6382,11 +6419,11 @@ static int propagate_liveness(struct bpf_verifier_env *env,
 		state_reg = &state->stack[i].spilled_ptr;
 		err = propagate_liveness_reg(env, state_reg,
 					     parent_reg);
-		if (err)
+		if (err < 0)
 			return err;
 	}
 	}
-	return err;
+	return 0;
 }
 
 static int is_state_visited(struct bpf_verifier_env *env, int insn_idx)

From patchwork Mon Apr 15 17:26:13 2019
b=XylqYUlg5dUwJxgARyB34Fra+yhteHFLKrfFcL1uRqyOR2iz4WofXQfigvzJuA2d3d v36l1pyG+bgapBWDgQZanOHAnT4ajmQRRwJ7cFZRtqc/ZvF59SkxoPXaX4ef93dFIua/ 04iWRiRoSm3qH0Th0rVKgM62xfTh825Y7ZwkDm6eKWkJU33OBr47iYhR2obLqDvLPSFd BUysvhidrNKwPVf1zwifIzLdIpbdV3z+z+CBkE/RiudQc2Hxo6EDLpypa2r3Fv/r9uWX /jfH/nuN0nhHXTxLN5voYzu7Dq89He3OZHv4Hqd6RzCW9YfB1jVqLtux8GTFK/fprxn5 UKig== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=IrCpCr3OCrzb+mQK856rmx6hx+80o08AR4zr9tjTQ5c=; b=HAYqPrwu5fryVA+bsVZnuyEfqWLtZw5HOrhGyPMy91sCGq9ziepiC6c2O0EaxcpOUz PxLVC8IhxveSUB5W5loX1vv9sL1lvj4EJJjt8Ia961eveUd20mw/xu8pgs97HQtByXUx 83HUx4aL4mdW7fgd81bZ1bH0hSvjwZXciFY8/741ce1eQJPV6332GJZhaFjO/lM7nQpL WrtkfCXCadHMdErxD66nDVYelKeLTUn8PMvGYVWlfv1bbiFSjKE0n4BqcRAPzLyFaVvU 1GgEnYGe3CeBnLjdsC9klQviTk/YQPzHu+HeSiozmdKPv4AvZM9h/nNpvB+FQgcIcNx+ c6eA== X-Gm-Message-State: APjAAAVAi4puwj+yayrHSTXK/T8qbcGsSGShfka7EDHYvbVMpDaKRhdW igFtDErHOvDRG9l59Ib/IsOa+w== X-Google-Smtp-Source: APXvYqyYYbKAF+LU8qZH/h5KBuHs8Q93lqcZ05XkY73pKE+KWqDO+8xlp356yHUDHMgE62+hTN0NBw== X-Received: by 2002:a5d:69c7:: with SMTP id s7mr20138647wrw.71.1555349199376; Mon, 15 Apr 2019 10:26:39 -0700 (PDT) Received: from cbtest28.netronome.com ([217.38.71.146]) by smtp.gmail.com with ESMTPSA id v190sm27094232wme.18.2019.04.15.10.26.38 (version=TLS1_2 cipher=ECDHE-RSA-AES128-SHA bits=128/128); Mon, 15 Apr 2019 10:26:38 -0700 (PDT) From: Jiong Wang To: alexei.starovoitov@gmail.com, daniel@iogearbox.net Cc: bpf@vger.kernel.org, netdev@vger.kernel.org, oss-drivers@netronome.com, Jiong Wang Subject: [PATCH v4 bpf-next 03/15] bpf: reduce false alarm by refining helper call arg types Date: Mon, 15 Apr 2019 18:26:13 +0100 Message-Id: <1555349185-12508-4-git-send-email-jiong.wang@netronome.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1555349185-12508-1-git-send-email-jiong.wang@netronome.com> References: 
<1555349185-12508-1-git-send-email-jiong.wang@netronome.com> Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org Unlike BPF to BPF function call, BPF helper call is calls to native insns that verifier can't walk. So, verifier needs helper proto type descriptions for data-flow purpose. There is such information already, but it is not differentiate sub-register read with full register read. This patch split "enum bpf_arg_type" for sub-register read, and updated descriptions for several functions that shown frequent usage in one Cilium benchmark. "is_reg64" then taught about these new arg types. Reviewed-by: Jakub Kicinski Signed-off-by: Jiong Wang --- include/linux/bpf.h | 3 +++ kernel/bpf/core.c | 2 +- kernel/bpf/helpers.c | 2 +- kernel/bpf/verifier.c | 61 +++++++++++++++++++++++++++++++++++++++++++-------- net/core/filter.c | 28 +++++++++++------------ 5 files changed, 71 insertions(+), 25 deletions(-) diff --git a/include/linux/bpf.h b/include/linux/bpf.h index f15432d..884b8e1 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -197,9 +197,12 @@ enum bpf_arg_type { ARG_CONST_SIZE, /* number of bytes accessed from memory */ ARG_CONST_SIZE_OR_ZERO, /* number of bytes accessed from memory or 0 */ + ARG_CONST_SIZE32, /* Likewise, but size fits into 32-bit */ + ARG_CONST_SIZE32_OR_ZERO, /* Ditto */ ARG_PTR_TO_CTX, /* pointer to context */ ARG_ANYTHING, /* any (initialized) argument is ok */ + ARG_ANYTHING32, /* Likewise, but it is a 32-bit argument */ ARG_PTR_TO_SPIN_LOCK, /* pointer to bpf_spin_lock */ ARG_PTR_TO_SOCK_COMMON, /* pointer to sock_common */ ARG_PTR_TO_INT, /* pointer to int */ diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c index ace8c22..2792eda 100644 --- a/kernel/bpf/core.c +++ b/kernel/bpf/core.c @@ -2067,7 +2067,7 @@ const struct bpf_func_proto bpf_tail_call_proto = { .ret_type = RET_VOID, .arg1_type = ARG_PTR_TO_CTX, .arg2_type = ARG_CONST_MAP_PTR, - .arg3_type = ARG_ANYTHING, + 
.arg3_type = ARG_ANYTHING32, }; /* Stub for JITs that only support cBPF. eBPF programs are interpreted. diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c index 4266ffd..039ec8e 100644 --- a/kernel/bpf/helpers.c +++ b/kernel/bpf/helpers.c @@ -221,7 +221,7 @@ const struct bpf_func_proto bpf_get_current_comm_proto = { .gpl_only = false, .ret_type = RET_INTEGER, .arg1_type = ARG_PTR_TO_UNINIT_MEM, - .arg2_type = ARG_CONST_SIZE, + .arg2_type = ARG_CONST_SIZE32, }; #if defined(CONFIG_QUEUED_SPINLOCKS) || defined(CONFIG_BPF_ARCH_SPINLOCK) diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index d5cc167..388a583 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -1180,12 +1180,47 @@ static int mark_reg_read(struct bpf_verifier_env *env, return 0; } +static bool helper_call_arg64(struct bpf_verifier_env *env, int func_id, + u32 regno) +{ + /* get_func_proto must succeed, other it should have been rejected + * early inside check_helper_call. + */ + const struct bpf_func_proto *fn = + env->ops->get_func_proto(func_id, env->prog); + enum bpf_arg_type arg_type; + + switch (regno) { + case BPF_REG_1: + arg_type = fn->arg1_type; + break; + case BPF_REG_2: + arg_type = fn->arg2_type; + break; + case BPF_REG_3: + arg_type = fn->arg3_type; + break; + case BPF_REG_4: + arg_type = fn->arg4_type; + break; + case BPF_REG_5: + arg_type = fn->arg5_type; + break; + default: + arg_type = ARG_DONTCARE; + } + + return arg_type != ARG_CONST_SIZE32 && + arg_type != ARG_CONST_SIZE32_OR_ZERO && + arg_type != ARG_ANYTHING32; +} + /* This function is supposed to be used by the following 32-bit optimization * code only. It returns TRUE if the source or destination register operates * on 64-bit, otherwise return FALSE. 
*/ -static bool is_reg64(struct bpf_insn *insn, u32 regno, - struct bpf_reg_state *reg, enum reg_arg_type t) +static bool is_reg64(struct bpf_verifier_env *env, struct bpf_insn *insn, + u32 regno, struct bpf_reg_state *reg, enum reg_arg_type t) { u8 code, class, op; @@ -1207,9 +1242,12 @@ static bool is_reg64(struct bpf_insn *insn, u32 regno, if (insn->src_reg == BPF_PSEUDO_CALL) return false; /* Helper call will reach here because of arg type - * check. Conservatively marking all args as 64-bit. + * check. */ - return true; + if (t == SRC_OP) + return helper_call_arg64(env, insn->imm, regno); + + return false; } } @@ -1289,7 +1327,7 @@ static int check_reg_arg(struct bpf_verifier_env *env, u32 regno, } reg = ®s[regno]; - rw64 = is_reg64(insn, regno, reg, t); + rw64 = is_reg64(env, insn, regno, reg, t); if (t == SRC_OP) { /* check whether register used as source operand can be read */ if (reg->type == NOT_INIT) { @@ -2582,7 +2620,9 @@ static bool arg_type_is_mem_ptr(enum bpf_arg_type type) static bool arg_type_is_mem_size(enum bpf_arg_type type) { return type == ARG_CONST_SIZE || - type == ARG_CONST_SIZE_OR_ZERO; + type == ARG_CONST_SIZE_OR_ZERO || + type == ARG_CONST_SIZE32 || + type == ARG_CONST_SIZE32_OR_ZERO; } static bool arg_type_is_int_ptr(enum bpf_arg_type type) @@ -2616,7 +2656,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 regno, if (err) return err; - if (arg_type == ARG_ANYTHING) { + if (arg_type == ARG_ANYTHING || arg_type == ARG_ANYTHING32) { if (is_pointer_value(env, regno)) { verbose(env, "R%d leaks addr into helper function\n", regno); @@ -2639,7 +2679,9 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 regno, type != expected_type) goto err_type; } else if (arg_type == ARG_CONST_SIZE || - arg_type == ARG_CONST_SIZE_OR_ZERO) { + arg_type == ARG_CONST_SIZE_OR_ZERO || + arg_type == ARG_CONST_SIZE32 || + arg_type == ARG_CONST_SIZE32_OR_ZERO) { expected_type = SCALAR_VALUE; if (type != expected_type) goto err_type; @@ 
-2739,7 +2781,8 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 regno, meta->map_ptr->value_size, false, meta); } else if (arg_type_is_mem_size(arg_type)) { - bool zero_size_allowed = (arg_type == ARG_CONST_SIZE_OR_ZERO); + bool zero_size_allowed = (arg_type == ARG_CONST_SIZE_OR_ZERO || + arg_type == ARG_CONST_SIZE32_OR_ZERO); /* remember the mem_size which may be used later * to refine return values. diff --git a/net/core/filter.c b/net/core/filter.c index 95a27fd..d3c7200 100644 --- a/net/core/filter.c +++ b/net/core/filter.c @@ -1694,9 +1694,9 @@ static const struct bpf_func_proto bpf_skb_store_bytes_proto = { .gpl_only = false, .ret_type = RET_INTEGER, .arg1_type = ARG_PTR_TO_CTX, - .arg2_type = ARG_ANYTHING, + .arg2_type = ARG_ANYTHING32, .arg3_type = ARG_PTR_TO_MEM, - .arg4_type = ARG_CONST_SIZE, + .arg4_type = ARG_CONST_SIZE32, .arg5_type = ARG_ANYTHING, }; @@ -1725,9 +1725,9 @@ static const struct bpf_func_proto bpf_skb_load_bytes_proto = { .gpl_only = false, .ret_type = RET_INTEGER, .arg1_type = ARG_PTR_TO_CTX, - .arg2_type = ARG_ANYTHING, + .arg2_type = ARG_ANYTHING32, .arg3_type = ARG_PTR_TO_UNINIT_MEM, - .arg4_type = ARG_CONST_SIZE, + .arg4_type = ARG_CONST_SIZE32, }; BPF_CALL_5(bpf_skb_load_bytes_relative, const struct sk_buff *, skb, @@ -1876,7 +1876,7 @@ static const struct bpf_func_proto bpf_l3_csum_replace_proto = { .gpl_only = false, .ret_type = RET_INTEGER, .arg1_type = ARG_PTR_TO_CTX, - .arg2_type = ARG_ANYTHING, + .arg2_type = ARG_ANYTHING32, .arg3_type = ARG_ANYTHING, .arg4_type = ARG_ANYTHING, .arg5_type = ARG_ANYTHING, @@ -1929,7 +1929,7 @@ static const struct bpf_func_proto bpf_l4_csum_replace_proto = { .gpl_only = false, .ret_type = RET_INTEGER, .arg1_type = ARG_PTR_TO_CTX, - .arg2_type = ARG_ANYTHING, + .arg2_type = ARG_ANYTHING32, .arg3_type = ARG_ANYTHING, .arg4_type = ARG_ANYTHING, .arg5_type = ARG_ANYTHING, @@ -1968,9 +1968,9 @@ static const struct bpf_func_proto bpf_csum_diff_proto = { .pkt_access = true, .ret_type = 
RET_INTEGER, .arg1_type = ARG_PTR_TO_MEM_OR_NULL, - .arg2_type = ARG_CONST_SIZE_OR_ZERO, + .arg2_type = ARG_CONST_SIZE32_OR_ZERO, .arg3_type = ARG_PTR_TO_MEM_OR_NULL, - .arg4_type = ARG_CONST_SIZE_OR_ZERO, + .arg4_type = ARG_CONST_SIZE32_OR_ZERO, .arg5_type = ARG_ANYTHING, }; @@ -2151,7 +2151,7 @@ static const struct bpf_func_proto bpf_redirect_proto = { .func = bpf_redirect, .gpl_only = false, .ret_type = RET_INTEGER, - .arg1_type = ARG_ANYTHING, + .arg1_type = ARG_ANYTHING32, .arg2_type = ARG_ANYTHING, }; @@ -2929,7 +2929,7 @@ static const struct bpf_func_proto bpf_skb_change_proto_proto = { .gpl_only = false, .ret_type = RET_INTEGER, .arg1_type = ARG_PTR_TO_CTX, - .arg2_type = ARG_ANYTHING, + .arg2_type = ARG_ANYTHING32, .arg3_type = ARG_ANYTHING, }; @@ -2949,7 +2949,7 @@ static const struct bpf_func_proto bpf_skb_change_type_proto = { .gpl_only = false, .ret_type = RET_INTEGER, .arg1_type = ARG_PTR_TO_CTX, - .arg2_type = ARG_ANYTHING, + .arg2_type = ARG_ANYTHING32, }; static u32 bpf_skb_net_base_len(const struct sk_buff *skb) @@ -3241,7 +3241,7 @@ static const struct bpf_func_proto bpf_skb_change_tail_proto = { .gpl_only = false, .ret_type = RET_INTEGER, .arg1_type = ARG_PTR_TO_CTX, - .arg2_type = ARG_ANYTHING, + .arg2_type = ARG_ANYTHING32, .arg3_type = ARG_ANYTHING, }; @@ -3837,7 +3837,7 @@ static const struct bpf_func_proto bpf_skb_get_tunnel_key_proto = { .ret_type = RET_INTEGER, .arg1_type = ARG_PTR_TO_CTX, .arg2_type = ARG_PTR_TO_UNINIT_MEM, - .arg3_type = ARG_CONST_SIZE, + .arg3_type = ARG_CONST_SIZE32, .arg4_type = ARG_ANYTHING, }; @@ -3946,7 +3946,7 @@ static const struct bpf_func_proto bpf_skb_set_tunnel_key_proto = { .ret_type = RET_INTEGER, .arg1_type = ARG_PTR_TO_CTX, .arg2_type = ARG_PTR_TO_MEM, - .arg3_type = ARG_CONST_SIZE, + .arg3_type = ARG_CONST_SIZE32, .arg4_type = ARG_ANYTHING, };
From patchwork Mon Apr 15 17:26:14 2019
From: Jiong Wang
To: alexei.starovoitov@gmail.com, daniel@iogearbox.net
Cc: bpf@vger.kernel.org, netdev@vger.kernel.org, oss-drivers@netronome.com, Jiong Wang
Subject: [PATCH v4 bpf-next 04/15] bpf: insert explicit zero extension insn when hardware doesn't do it implicitly
Date: Mon, 15 Apr 2019 18:26:14 +0100
Message-Id: <1555349185-12508-5-git-send-email-jiong.wang@netronome.com>
After previous patches, verifier has marked those instructions that really need zero extension on dst_reg. It is then for all back-ends to decide how to use such information to eliminate unnecessary zero extension code-gen during JIT compilation. One approach is: 1. 
The verifier inserts explicit zero extension for those instructions that need zero extension. 2. All JIT back-ends do NOT generate zero extension for sub-register writes any more. The good thing about this approach is that there is no major change to the JIT back-end interface, so all back-ends get this optimization. However, only those back-ends that do not have hardware zero extension want this optimization. For back-ends like x86_64 and AArch64, there is hardware support, so zext insertion should be disabled. This patch introduces a new target hook "bpf_jit_hardware_zext" which defaults to true, meaning the underlying hardware will do zero extension implicitly, therefore zext insertion by the verifier will be disabled. Once a back-end overrides this hook to return false, the verifier will insert a zext sequence to clear the high 32-bit of definitions when necessary. Offload targets do not use this native target hook; instead, they can get the optimization results using bpf_prog_offload_ops.finalize. Reviewed-by: Jakub Kicinski Signed-off-by: Jiong Wang --- include/linux/bpf.h | 1 + include/linux/filter.h | 1 + kernel/bpf/core.c | 8 +++++ kernel/bpf/verifier.c | 87 +++++++++++++++++++++++++++++++++++++++++++++++++- 4 files changed, 96 insertions(+), 1 deletion(-) diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 884b8e1..bdab6e7 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -368,6 +368,7 @@ struct bpf_prog_aux { u32 id; u32 func_cnt; /* used by non-func prog as the number of func progs */ u32 func_idx; /* 0 for non-func prog, the index in func array for func prog */ + bool no_verifier_zext; /* No zero extension insertion by verifier. */ bool offload_requested; struct bpf_prog **func; void *jit_data; /* JIT specific data. 
arch dependent */ diff --git a/include/linux/filter.h b/include/linux/filter.h index fb0edad..8750657 100644 --- a/include/linux/filter.h +++ b/include/linux/filter.h @@ -821,6 +821,7 @@ u64 __bpf_call_base(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5); struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog); void bpf_jit_compile(struct bpf_prog *prog); +bool bpf_jit_hardware_zext(void); bool bpf_helper_changes_pkt_data(void *func); static inline bool bpf_dump_raw_ok(void) diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c index 2792eda..1c54274 100644 --- a/kernel/bpf/core.c +++ b/kernel/bpf/core.c @@ -2091,6 +2091,14 @@ bool __weak bpf_helper_changes_pkt_data(void *func) return false; } +/* Return TRUE if the target hardware of JIT will do zero extension to high bits + * when writing to low 32-bit of one register. Otherwise, return FALSE. + */ +bool __weak bpf_jit_hardware_zext(void) +{ + return true; +} + /* To execute LD_ABS/LD_IND instructions __bpf_prog_run() may call * skb_copy_bits(), so provide a weak definition of it for NET-less config. */ diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 388a583..33d7e54 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -7555,6 +7555,80 @@ static int opt_remove_nops(struct bpf_verifier_env *env) return 0; } +static int opt_subreg_zext_lo32(struct bpf_verifier_env *env) +{ + struct bpf_insn_aux_data orig_aux, *aux = env->insn_aux_data; + struct bpf_insn *insns = env->prog->insnsi; + int i, delta = 0, len = env->prog->len; + struct bpf_insn zext_patch[3]; + struct bpf_prog *new_prog; + + zext_patch[1] = BPF_ALU64_IMM(BPF_LSH, 0, 32); + zext_patch[2] = BPF_ALU64_IMM(BPF_RSH, 0, 32); + for (i = 0; i < len; i++) { + int adj_idx = i + delta; + struct bpf_insn insn; + + if (!aux[adj_idx].zext_dst) + continue; + + insn = insns[adj_idx]; + /* "adjust_insn_aux_data" only retains the original insn aux + * data if insn at patched offset is at the end of the patch + * buffer. 
That is to say, given the following insn sequence: + * + * insn 1 + * insn 2 + * insn 3 + * + * if the patch offset is at insn 2, then the patch buffer must + * be the following so that the original insn aux data can be + * retained. + * + * {lshift, rshift, insn2} + * + * However, zero extension needs to be inserted after insn2, so + * insn patch buffer needs to be the following: + * + * {insn2, lshift, rshift} + * + * which would cause the insn aux data of insn2 to be lost, and + * that data is critical for ctx field load instructions to be + * transformed correctly later inside "convert_ctx_accesses". + * + * The simplest way to fix this is to build the following patch + * buffer: + * + * {lshift, rshift, insn-next-to-insn2} + * + * Given insn2 defines a value, it can't be a JMP, hence there + * must be a next insn for it, otherwise the CFG check would have + * rejected this program. However, insn-next-to-insn2 could + * be a JMP and verifier insn patch infrastructure doesn't + * support adjusting offsets for a JMP inside the patch buffer. We + * would end up with some insn checking and offset adjustment code + * outside of the generic insn patch helpers if we go with this + * approach. + * + * Therefore, we still use {insn2, lshift, rshift} as the patch + * buffer, and copy and restore the insn aux data for insn2 + * explicitly. The change looks simpler and smaller. 
+ */ + zext_patch[0] = insns[adj_idx]; + zext_patch[1].dst_reg = insn.dst_reg; + zext_patch[2].dst_reg = insn.dst_reg; + memcpy(&orig_aux, &aux[adj_idx], sizeof(orig_aux)); + new_prog = bpf_patch_insn_data(env, adj_idx, zext_patch, 3); + if (!new_prog) + return -ENOMEM; + env->prog = new_prog; + insns = new_prog->insnsi; + aux = env->insn_aux_data; + memcpy(&aux[adj_idx], &orig_aux, sizeof(orig_aux)); + delta += 2; + } + + return 0; +} + /* convert load instructions that access fields of a context type into a * sequence of instructions that access fields of the underlying structure: * struct __sk_buff -> struct sk_buff @@ -8386,7 +8460,18 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, if (ret == 0) ret = check_max_stack_depth(env); - /* instruction rewrites happen after this point */ + /* Instruction rewrites happen after this point. + * For offload target, finalize hook has all aux insn info, do any + * customized work there. + */ + if (ret == 0 && !bpf_jit_hardware_zext() && + !bpf_prog_is_dev_bound(env->prog->aux)) { + ret = opt_subreg_zext_lo32(env); + env->prog->aux->no_verifier_zext = !!ret; + } else { + env->prog->aux->no_verifier_zext = true; + } + if (is_priv) { if (ret == 0) opt_hard_wire_dead_code_branches(env);
From patchwork Mon Apr 15 17:26:15 2019
From: Jiong Wang
To: alexei.starovoitov@gmail.com, daniel@iogearbox.net
Cc: bpf@vger.kernel.org, netdev@vger.kernel.org, oss-drivers@netronome.com, Jiong Wang
Subject: [PATCH v4 bpf-next 05/15] bpf: introduce new bpf prog load flags "BPF_F_TEST_RND_HI32"
Date: Mon, 15 Apr 2019 18:26:15 +0100
Message-Id: <1555349185-12508-6-git-send-email-jiong.wang@netronome.com>
x86_64 and AArch64 are perhaps the two arches on which the bpf testsuite is run most frequently; however, the zero extension insertion pass is not enabled for them because of their hardware support. It is critical to guarantee the pass's correctness, as it is supposed to be enabled by default for a number of other arches, for example PowerPC, SPARC, arm, NFP etc. Therefore, it would be very useful if there were a way to test this pass on, for example, x86_64. The test methodology employed by this set is "poisoning" useless bits. The high 32-bit of a definition is randomized if it is identified as not used by any later instruction. Such randomization is only enabled under testing mode, which is gated by the new bpf prog load flag "BPF_F_TEST_RND_HI32". 
Suggested-by: Alexei Starovoitov Signed-off-by: Jiong Wang --- include/uapi/linux/bpf.h | 18 ++++++++++++++++++ kernel/bpf/syscall.c | 4 +++- tools/include/uapi/linux/bpf.h | 18 ++++++++++++++++++ 3 files changed, 39 insertions(+), 1 deletion(-) diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h index c26be24..89c85a3 100644 --- a/include/uapi/linux/bpf.h +++ b/include/uapi/linux/bpf.h @@ -258,6 +258,24 @@ enum bpf_attach_type { */ #define BPF_F_ANY_ALIGNMENT (1U << 1) +/* BPF_F_TEST_RND_HI32 is used in BPF_PROG_LOAD command for testing purposes. + * Verifier does sub-register def/use analysis and identifies instructions whose + * def only matters for low 32-bit, high 32-bit is never referenced later + * through implicit zero extension. Therefore verifier notifies JIT back-ends + * that it is safe to ignore clearing high 32-bit for these instructions. This + * saves some back-ends a lot of code-gen. However such optimization is not + * necessary on some arches, for example x86_64, arm64 etc, whose JIT back-ends + * hence haven't used verifier's analysis result. But, we really want to have a + * way to verify the correctness of the described optimization on + * x86_64 on which testsuites are frequently exercised. + * + * So, this flag is introduced. Once it is set, verifier will randomize high + * 32-bit for those instructions that have been identified as safe to ignore. + * Then, if verifier is not doing correct analysis, such randomization will + * regress tests to expose bugs. + */ +#define BPF_F_TEST_RND_HI32 (1U << 2) + /* When BPF ldimm64's insn[0].src_reg != 0 then this can have * two extensions: * diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index 92c9b8a..abe2804 100644 --- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -1600,7 +1600,9 @@ static int bpf_prog_load(union bpf_attr *attr, union bpf_attr __user *uattr) if (CHECK_ATTR(BPF_PROG_LOAD)) return -EINVAL; - if (attr->prog_flags & ~(BPF_F_STRICT_ALIGNMENT | BPF_F_ANY_ALIGNMENT)) + if (attr->prog_flags & ~(BPF_F_STRICT_ALIGNMENT | + BPF_F_ANY_ALIGNMENT | + BPF_F_TEST_RND_HI32)) return -EINVAL; if (!IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) && diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h index c26be24..89c85a3 100644 --- a/tools/include/uapi/linux/bpf.h +++ b/tools/include/uapi/linux/bpf.h @@ -258,6 +258,24 @@ enum bpf_attach_type { */ #define BPF_F_ANY_ALIGNMENT (1U << 1) +/* BPF_F_TEST_RND_HI32 is used in BPF_PROG_LOAD command for testing purposes. + * Verifier does sub-register def/use analysis and identifies instructions whose + * def only matters for low 32-bit, high 32-bit is never referenced later + * through implicit zero extension. Therefore verifier notifies JIT back-ends + * that it is safe to ignore clearing high 32-bit for these instructions. This + * saves some back-ends a lot of code-gen. However such optimization is not + * necessary on some arches, for example x86_64, arm64 etc, whose JIT back-ends + * hence haven't used verifier's analysis result. But, we really want to have a + * way to verify the correctness of the described optimization on + * x86_64 on which testsuites are frequently exercised. + * + * So, this flag is introduced. Once it is set, verifier will randomize high + * 32-bit for those instructions that have been identified as safe to ignore. + * Then, if verifier is not doing correct analysis, such randomization will + * regress tests to expose bugs. 
+ */ +#define BPF_F_TEST_RND_HI32 (1U << 2) + /* When BPF ldimm64's insn[0].src_reg != 0 then this can have * two extensions: *
From patchwork Mon Apr 15 17:26:16 2019
From: Jiong Wang
To: alexei.starovoitov@gmail.com, daniel@iogearbox.net
Cc: bpf@vger.kernel.org, netdev@vger.kernel.org, oss-drivers@netronome.com, Jiong Wang
Subject: [PATCH v4 bpf-next 06/15] bpf: randomize high 32-bit when BPF_F_TEST_RND_HI32 is set
Date: Mon, 15 Apr 2019 18:26:16 +0100
Message-Id: <1555349185-12508-7-git-send-email-jiong.wang@netronome.com>
This patch randomizes high 32-bit of 
a definition when BPF_F_TEST_RND_HI32 is set. It does this whenever the flag is set, regardless of whether there is hardware zero extension support, because this is a test feature and we want to deliver the most stressful test. Suggested-by: Alexei Starovoitov Signed-off-by: Jiong Wang --- kernel/bpf/verifier.c | 85 ++++++++++++++++++++++++++++++++++++++++----------- 1 file changed, 68 insertions(+), 17 deletions(-) diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 33d7e54..03c4443 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -7555,24 +7555,70 @@ static int opt_remove_nops(struct bpf_verifier_env *env) return 0; } -static int opt_subreg_zext_lo32(struct bpf_verifier_env *env) +static int opt_subreg_zext_lo32_rnd_hi32(struct bpf_verifier_env *env, + const union bpf_attr *attr) { struct bpf_insn_aux_data orig_aux, *aux = env->insn_aux_data; + struct bpf_insn *patch, zext_patch[3], rnd_hi32_patch[4]; + int i, patch_len, delta = 0, len = env->prog->len; struct bpf_insn *insns = env->prog->insnsi; - int i, delta = 0, len = env->prog->len; - struct bpf_insn zext_patch[3]; struct bpf_prog *new_prog; + bool rnd_hi32; + + rnd_hi32 = attr->prog_flags & BPF_F_TEST_RND_HI32; zext_patch[1] = BPF_ALU64_IMM(BPF_LSH, 0, 32); zext_patch[2] = BPF_ALU64_IMM(BPF_RSH, 0, 32); + rnd_hi32_patch[1] = BPF_ALU64_IMM(BPF_MOV, BPF_REG_AX, 0); + rnd_hi32_patch[2] = BPF_ALU64_IMM(BPF_LSH, BPF_REG_AX, 32); + rnd_hi32_patch[3] = BPF_ALU64_REG(BPF_OR, 0, BPF_REG_AX); for (i = 0; i < len; i++) { int adj_idx = i + delta; struct bpf_insn insn; - if (!aux[adj_idx].zext_dst) + insn = insns[adj_idx]; + if (!aux[adj_idx].zext_dst) { + u8 code, class; + u32 imm_rnd; + + if (!rnd_hi32) + continue; + + code = insn.code; + class = BPF_CLASS(code); + /* Insn doesn't define any value. */ + if (class == BPF_JMP || class == BPF_JMP32 || + class == BPF_STX || class == BPF_ST) + continue; + + /* NOTE: arg "reg" is only used for BPF_STX, as it has + * been ruled out in above check, it is safe to + * pass NULL here. + */ + if (is_reg64(env, &insn, insn.dst_reg, NULL, DST_OP)) { + if (class == BPF_LD && + BPF_MODE(code) == BPF_IMM) + i++; + continue; + } + + /* ctx load could be transformed into wider load. */ + if (class == BPF_LDX && + aux[adj_idx].ptr_type == PTR_TO_CTX) + continue; + + imm_rnd = get_random_int(); + rnd_hi32_patch[0] = insns[adj_idx]; + rnd_hi32_patch[1].imm = imm_rnd; + rnd_hi32_patch[3].dst_reg = insn.dst_reg; + patch = rnd_hi32_patch; + patch_len = 4; + goto apply_patch_buffer; + } + + if (bpf_jit_hardware_zext()) continue; - insn = insns[adj_idx]; /* "adjust_insn_aux_data" only retains the original insn aux * data if insn at patched offset is at the end of the patch * buffer. That is to say, given the following insn sequence: @@ -7615,15 +7661,18 @@ static int opt_subreg_zext_lo32(struct bpf_verifier_env *env) zext_patch[0] = insns[adj_idx]; zext_patch[1].dst_reg = insn.dst_reg; zext_patch[2].dst_reg = insn.dst_reg; + patch = zext_patch; + patch_len = 3; +apply_patch_buffer: memcpy(&orig_aux, &aux[adj_idx], sizeof(orig_aux)); - new_prog = bpf_patch_insn_data(env, adj_idx, zext_patch, 3); + new_prog = bpf_patch_insn_data(env, adj_idx, patch, patch_len); if (!new_prog) return -ENOMEM; env->prog = new_prog; insns = new_prog->insnsi; aux = env->insn_aux_data; memcpy(&aux[adj_idx], &orig_aux, sizeof(orig_aux)); - delta += 2; + delta += patch_len - 1; } return 0; @@ -8460,16 +8509,18 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, if (ret == 0) ret = check_max_stack_depth(env); - /* Instruction rewrites happen after this point. - * For offload target, finalize hook has all aux insn info, do any - * customized work there. 
- */ - if (ret == 0 && !bpf_jit_hardware_zext() && - !bpf_prog_is_dev_bound(env->prog->aux)) { - ret = opt_subreg_zext_lo32(env); - env->prog->aux->no_verifier_zext = !!ret; - } else { - env->prog->aux->no_verifier_zext = true; + /* Instruction rewrites happen after this point. */ + if (ret == 0) { + if (bpf_prog_is_dev_bound(env->prog->aux)) { + /* For offload target, finalize hook has all aux insn + * info, copy the analysis result there. + */ + env->prog->aux->no_verifier_zext = true; + } else { + ret = opt_subreg_zext_lo32_rnd_hi32(env, attr); + env->prog->aux->no_verifier_zext = + bpf_jit_hardware_zext() ? true : !!ret; + } } if (is_priv) {
From patchwork Mon Apr 15 17:26:17 2019
From: Jiong Wang
To: alexei.starovoitov@gmail.com, daniel@iogearbox.net
Cc: bpf@vger.kernel.org, netdev@vger.kernel.org, oss-drivers@netronome.com, Jiong Wang
Subject: [PATCH v4 bpf-next 07/15]
libbpf: add "prog_flags" to bpf_program/bpf_prog_load_attr/bpf_load_program_attr
Date: Mon, 15 Apr 2019 18:26:17 +0100
Message-Id: <1555349185-12508-8-git-send-email-jiong.wang@netronome.com>
In-Reply-To: <1555349185-12508-1-git-send-email-jiong.wang@netronome.com>

libbpf currently doesn't allow passing "prog_flags" during bpf program load
in a couple of load-related APIs: "bpf_load_program_xattr", "load_program"
and "bpf_prog_load_xattr". It makes sense to allow passing "prog_flags",
which is useful for customizing program loading.

Reviewed-by: Jakub Kicinski
Signed-off-by: Jiong Wang
---
 tools/lib/bpf/bpf.c    | 1 +
 tools/lib/bpf/bpf.h    | 1 +
 tools/lib/bpf/libbpf.c | 3 +++
 tools/lib/bpf/libbpf.h | 1 +
 4 files changed, 6 insertions(+)

diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
index 955191c..f79ec49 100644
--- a/tools/lib/bpf/bpf.c
+++ b/tools/lib/bpf/bpf.c
@@ -254,6 +254,7 @@ int bpf_load_program_xattr(const struct bpf_load_program_attr *load_attr,
 	if (load_attr->name)
 		memcpy(attr.prog_name, load_attr->name,
 		       min(strlen(load_attr->name), BPF_OBJ_NAME_LEN - 1));
+	attr.prog_flags = load_attr->prog_flags;

 	fd = sys_bpf_prog_load(&attr, sizeof(attr));
 	if (fd >= 0)
diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h
index bc30783..a983442 100644
--- a/tools/lib/bpf/bpf.h
+++ b/tools/lib/bpf/bpf.h
@@ -86,6 +86,7 @@ struct bpf_load_program_attr {
 	const void *line_info;
 	__u32 line_info_cnt;
 	__u32 log_level;
+	__u32 prog_flags;
 };

 /* Flags to direct loading requirements */
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index e5b77ad..e0affd0 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -182,6 +182,7 @@ struct bpf_program {
 	void *line_info;
 	__u32 line_info_rec_size;
 	__u32 line_info_cnt;
+	__u32 prog_flags;
 };

 enum libbpf_map_type {
@@ -1876,6 +1877,7 @@ load_program(struct bpf_program *prog, struct bpf_insn *insns, int insns_cnt,
 	load_attr.line_info_rec_size = prog->line_info_rec_size;
 	load_attr.line_info_cnt = prog->line_info_cnt;
 	load_attr.log_level = prog->log_level;
+	load_attr.prog_flags = prog->prog_flags;

 	if (!load_attr.insns || !load_attr.insns_cnt)
 		return -EINVAL;
@@ -3320,6 +3322,7 @@ int bpf_prog_load_xattr(const struct bpf_prog_load_attr *attr,
 							 expected_attach_type);
 		prog->log_level = attr->log_level;
+		prog->prog_flags = attr->prog_flags;

 		if (!first_prog)
 			first_prog = prog;
 	}
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index c5ff005..5abc237 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -320,6 +320,7 @@ struct bpf_prog_load_attr {
 	enum bpf_attach_type expected_attach_type;
 	int ifindex;
 	int log_level;
+	int prog_flags;
 };

 LIBBPF_API int bpf_prog_load_xattr(const struct bpf_prog_load_attr *attr,

From patchwork Mon Apr 15 17:26:18 2019
X-Patchwork-Id: 1085827
From: Jiong Wang
To: alexei.starovoitov@gmail.com, daniel@iogearbox.net
Cc: bpf@vger.kernel.org, netdev@vger.kernel.org, oss-drivers@netronome.com, Jiong Wang
Subject: [PATCH v4 bpf-next 08/15] selftests: enable hi32 randomization for all tests
Date: Mon, 15 Apr 2019 18:26:18 +0100
Message-Id: <1555349185-12508-9-git-send-email-jiong.wang@netronome.com>
In-Reply-To: <1555349185-12508-1-git-send-email-jiong.wang@netronome.com>

The previous libbpf patch allows the user to specify "prog_flags" in the
bpf program load APIs. To enable high 32-bit randomization for a test, we
need to set BPF_F_TEST_RND_HI32 in "prog_flags".

To enable such randomization for all tests, we need to make sure all
places pass BPF_F_TEST_RND_HI32. Changing them one by one is not
convenient; also, it would be better if a test could be switched back to
"normal" running mode without a code change. The program load APIs used
across the bpf selftests are mostly:

  bpf_prog_load:    load from file
  bpf_load_program: load from raw insns

A test_stub.c is implemented for the bpf selftests. It offers two
functions for testing purposes:

  bpf_prog_test_load
  bpf_test_load_program

They are the same as "bpf_prog_load" and "bpf_load_program", except they
also set BPF_F_TEST_RND_HI32. Given that the *_xattr functions are the
APIs for customizing any "prog_flags", it makes little sense to put these
two functions into libbpf. Instead, the following CFLAGS are passed when
compiling the host programs:

  -Dbpf_prog_load=bpf_prog_test_load
  -Dbpf_load_program=bpf_test_load_program

They redirect the used load APIs to the test versions, hence enabling
high 32-bit randomization for these tests without changing source code.
"test_verifier" already uses bpf_verify_program, which supports passing
"prog_flags", so it is fine as-is. However, two unit tests need to be
adjusted because the inserted hi32 randomization sequences would overflow
the 16-bit jump distance:

  - "ld_abs: vlan + abs, test1": "bpf_fill_ld_abs_vlan_push_pop" inside
    test_verifier.c now uses ALU64 moves, so no hi32 randomization
    sequence is inserted that could overflow the jump distance.
  - "bpf_fill_jump_around_ld_abs" needs to account for the hi32
    randomization applied to the load destination of ld_abs.

Besides these, several testcases use "bpf_prog_load_attr" directly; their
call sites are updated to pass BPF_F_TEST_RND_HI32.

Signed-off-by: Jiong Wang
---
 tools/testing/selftests/bpf/Makefile               | 10 +++---
 .../selftests/bpf/prog_tests/bpf_verif_scale.c     |  1 +
 tools/testing/selftests/bpf/test_sock_addr.c       |  1 +
 tools/testing/selftests/bpf/test_sock_fields.c     |  1 +
 tools/testing/selftests/bpf/test_socket_cookie.c   |  1 +
 tools/testing/selftests/bpf/test_stub.c            | 40 ++++++++++++++++++++++
 tools/testing/selftests/bpf/test_verifier.c        |  6 ++--
 7 files changed, 53 insertions(+), 7 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/test_stub.c

diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
index f9d83ba..f2accf6 100644
--- a/tools/testing/selftests/bpf/Makefile
+++ b/tools/testing/selftests/bpf/Makefile
@@ -15,7 +15,9 @@ LLC		?= llc
 LLVM_OBJCOPY	?= llvm-objcopy
 LLVM_READELF	?= llvm-readelf
 BTF_PAHOLE	?= pahole
-CFLAGS += -Wall -O2 -I$(APIDIR) -I$(LIBDIR) -I$(BPFDIR) -I$(GENDIR) $(GENFLAGS) -I../../../include
+CFLAGS += -Wall -O2 -I$(APIDIR) -I$(LIBDIR) -I$(BPFDIR) -I$(GENDIR) $(GENFLAGS) -I../../../include \
+	  -Dbpf_prog_load=bpf_prog_test_load \
+	  -Dbpf_load_program=bpf_test_load_program
 LDLIBS += -lcap -lelf -lrt -lpthread

 # Order correspond to 'make run_tests' order
@@ -76,9 +78,9 @@ $(OUTPUT)/urandom_read: $(OUTPUT)/%: %.c

 BPFOBJ := $(OUTPUT)/libbpf.a

-$(TEST_GEN_PROGS): $(BPFOBJ)
+$(TEST_GEN_PROGS): test_stub.o $(BPFOBJ)

-$(TEST_GEN_PROGS_EXTENDED): $(OUTPUT)/libbpf.a
+$(TEST_GEN_PROGS_EXTENDED): test_stub.o $(OUTPUT)/libbpf.a

 $(OUTPUT)/test_dev_cgroup: cgroup_helpers.c
 $(OUTPUT)/test_skb_cgroup_id_user: cgroup_helpers.c
@@ -174,7 +176,7 @@ $(ALU32_BUILD_DIR)/test_progs_32: test_progs.c $(OUTPUT)/libbpf.a\
 	$(ALU32_BUILD_DIR)/urandom_read
 	$(CC) $(TEST_PROGS_CFLAGS) $(CFLAGS) \
 		-o $(ALU32_BUILD_DIR)/test_progs_32 \
-		test_progs.c trace_helpers.c prog_tests/*.c \
+		test_progs.c test_stub.c trace_helpers.c prog_tests/*.c \
 		$(OUTPUT)/libbpf.a $(LDLIBS)

 $(ALU32_BUILD_DIR)/test_progs_32: $(PROG_TESTS_H)
diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_verif_scale.c b/tools/testing/selftests/bpf/prog_tests/bpf_verif_scale.c
index 23b159d..2623d15 100644
--- a/tools/testing/selftests/bpf/prog_tests/bpf_verif_scale.c
+++ b/tools/testing/selftests/bpf/prog_tests/bpf_verif_scale.c
@@ -22,6 +22,7 @@ static int check_load(const char *file)
 	attr.file = file;
 	attr.prog_type = BPF_PROG_TYPE_SCHED_CLS;
 	attr.log_level = 4;
+	attr.prog_flags = BPF_F_TEST_RND_HI32;
 	err = bpf_prog_load_xattr(&attr, &obj, &prog_fd);
 	bpf_object__close(obj);
 	if (err)
diff --git a/tools/testing/selftests/bpf/test_sock_addr.c b/tools/testing/selftests/bpf/test_sock_addr.c
index 3f110ea..5d0c4f0 100644
--- a/tools/testing/selftests/bpf/test_sock_addr.c
+++ b/tools/testing/selftests/bpf/test_sock_addr.c
@@ -745,6 +745,7 @@ static int load_path(const struct sock_addr_test *test, const char *path)
 	attr.file = path;
 	attr.prog_type = BPF_PROG_TYPE_CGROUP_SOCK_ADDR;
 	attr.expected_attach_type = test->expected_attach_type;
+	attr.prog_flags = BPF_F_TEST_RND_HI32;

 	if (bpf_prog_load_xattr(&attr, &obj, &prog_fd)) {
 		if (test->expected_result != LOAD_REJECT)
diff --git a/tools/testing/selftests/bpf/test_sock_fields.c b/tools/testing/selftests/bpf/test_sock_fields.c
index dcae7f6..f08c8ee 100644
--- a/tools/testing/selftests/bpf/test_sock_fields.c
+++ b/tools/testing/selftests/bpf/test_sock_fields.c
@@ -339,6 +339,7 @@ int main(int argc, char **argv)
 	struct bpf_prog_load_attr attr = {
 		.file = "test_sock_fields_kern.o",
 		.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
+		.prog_flags = BPF_F_TEST_RND_HI32,
 	};
 	int cgroup_fd, egress_fd, ingress_fd, err;
 	struct bpf_program *ingress_prog;
diff --git a/tools/testing/selftests/bpf/test_socket_cookie.c b/tools/testing/selftests/bpf/test_socket_cookie.c
index e51d637..cac8ee5 100644
--- a/tools/testing/selftests/bpf/test_socket_cookie.c
+++ b/tools/testing/selftests/bpf/test_socket_cookie.c
@@ -148,6 +148,7 @@ static int run_test(int cgfd)
 	memset(&attr, 0, sizeof(attr));
 	attr.file = SOCKET_COOKIE_PROG;
 	attr.prog_type = BPF_PROG_TYPE_UNSPEC;
+	attr.prog_flags = BPF_F_TEST_RND_HI32;

 	err = bpf_prog_load_xattr(&attr, &pobj, &prog_fd);
 	if (err) {
diff --git a/tools/testing/selftests/bpf/test_stub.c b/tools/testing/selftests/bpf/test_stub.c
new file mode 100644
index 0000000..84e81a8
--- /dev/null
+++ b/tools/testing/selftests/bpf/test_stub.c
@@ -0,0 +1,40 @@
+// SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+/* Copyright (C) 2019 Netronome Systems, Inc. */
+
+#include <bpf/bpf.h>
+#include <bpf/libbpf.h>
+#include <string.h>
+
+int bpf_prog_test_load(const char *file, enum bpf_prog_type type,
+		       struct bpf_object **pobj, int *prog_fd)
+{
+	struct bpf_prog_load_attr attr;
+
+	memset(&attr, 0, sizeof(struct bpf_prog_load_attr));
+	attr.file = file;
+	attr.prog_type = type;
+	attr.expected_attach_type = 0;
+	attr.prog_flags = BPF_F_TEST_RND_HI32;
+
+	return bpf_prog_load_xattr(&attr, pobj, prog_fd);
+}
+
+int bpf_test_load_program(enum bpf_prog_type type, const struct bpf_insn *insns,
+			  size_t insns_cnt, const char *license,
+			  __u32 kern_version, char *log_buf,
+			  size_t log_buf_sz)
+{
+	struct bpf_load_program_attr load_attr;
+
+	memset(&load_attr, 0, sizeof(struct bpf_load_program_attr));
+	load_attr.prog_type = type;
+	load_attr.expected_attach_type = 0;
+	load_attr.name = NULL;
+	load_attr.insns = insns;
+	load_attr.insns_cnt = insns_cnt;
+	load_attr.license = license;
+	load_attr.kern_version = kern_version;
+	load_attr.prog_flags = BPF_F_TEST_RND_HI32;
+
+	return bpf_load_program_xattr(&load_attr, log_buf, log_buf_sz);
+}
diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c
index e2ebcad..a5eacc8 100644
--- a/tools/testing/selftests/bpf/test_verifier.c
+++ b/tools/testing/selftests/bpf/test_verifier.c
@@ -161,7 +161,7 @@ static void bpf_fill_ld_abs_vlan_push_pop(struct bpf_test *self)
 		goto loop;

 	for (; i < len - 1; i++)
-		insn[i] = BPF_ALU32_IMM(BPF_MOV, BPF_REG_0, 0xbef);
+		insn[i] = BPF_ALU64_IMM(BPF_MOV, BPF_REG_0, 0xbef);
 	insn[len - 1] = BPF_EXIT_INSN();
 	self->prog_len = len;
 }
@@ -170,7 +170,7 @@ static void bpf_fill_jump_around_ld_abs(struct bpf_test *self)
 {
 	struct bpf_insn *insn = self->fill_insns;
 	/* jump range is limited to 16 bit. every ld_abs is replaced by 6 insns */
-	unsigned int len = (1 << 15) / 6;
+	unsigned int len = (1 << 15) / 9;
 	int i = 0;

 	insn[i++] = BPF_MOV64_REG(BPF_REG_6, BPF_REG_1);
@@ -783,7 +783,7 @@ static void do_test_single(struct bpf_test *test, bool unpriv,
 	if (fixup_skips != skips)
 		return;

-	pflags = 0;
+	pflags = BPF_F_TEST_RND_HI32;
 	if (test->flags & F_LOAD_WITH_STRICT_ALIGNMENT)
 		pflags |= BPF_F_STRICT_ALIGNMENT;
 	if (test->flags & F_NEEDS_EFFICIENT_UNALIGNED_ACCESS)

From patchwork Mon Apr 15 17:26:19 2019
X-Patchwork-Id: 1085812
From: Jiong Wang
To: alexei.starovoitov@gmail.com, daniel@iogearbox.net
Cc: bpf@vger.kernel.org, netdev@vger.kernel.org, oss-drivers@netronome.com, Jiong Wang, Shubham Bansal
Subject: [PATCH v4 bpf-next 09/15] arm: bpf: eliminate zero extension code-gen
Date: Mon, 15 Apr 2019 18:26:19 +0100
Message-Id:
<1555349185-12508-10-git-send-email-jiong.wang@netronome.com>
In-Reply-To: <1555349185-12508-1-git-send-email-jiong.wang@netronome.com>

Cc: Shubham Bansal
Signed-off-by: Jiong Wang
---
 arch/arm/net/bpf_jit_32.c | 22 ++++++++++++++++------
 1 file changed, 16 insertions(+), 6 deletions(-)

diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c
index c8bfbbf..8cecd06 100644
--- a/arch/arm/net/bpf_jit_32.c
+++ b/arch/arm/net/bpf_jit_32.c
@@ -736,7 +736,8 @@ static inline void emit_a32_alu_r64(const bool is64, const s8 dst[],
 		/* ALU operation */
 		emit_alu_r(rd[1], rs, true, false, op, ctx);
-		emit_a32_mov_i(rd[0], 0, ctx);
+		if (ctx->prog->aux->no_verifier_zext)
+			emit_a32_mov_i(rd[0], 0, ctx);
 	}

 	arm_bpf_put_reg64(dst, rd, ctx);
@@ -758,8 +759,9 @@ static inline void emit_a32_mov_r64(const bool is64, const s8 dst[],
 				  struct jit_ctx *ctx) {
 	if (!is64) {
 		emit_a32_mov_r(dst_lo, src_lo, ctx);
-		/* Zero out high 4 bytes */
-		emit_a32_mov_i(dst_hi, 0, ctx);
+		if (ctx->prog->aux->no_verifier_zext)
+			/* Zero out high 4 bytes */
+			emit_a32_mov_i(dst_hi, 0, ctx);
 	} else if (__LINUX_ARM_ARCH__ < 6 &&
 		   ctx->cpu_architecture < CPU_ARCH_ARMv5TE) {
 		/* complete 8 byte move */
@@ -1438,7 +1440,8 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 		}
 		emit_udivmod(rd_lo, rd_lo, rt, ctx, BPF_OP(code));
 		arm_bpf_put_reg32(dst_lo, rd_lo, ctx);
-		emit_a32_mov_i(dst_hi, 0, ctx);
+		if (ctx->prog->aux->no_verifier_zext)
+			emit_a32_mov_i(dst_hi, 0, ctx);
 		break;
 	case BPF_ALU64 | BPF_DIV | BPF_K:
 	case BPF_ALU64 | BPF_DIV | BPF_X:
@@ -1453,7 +1456,8 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 			return -EINVAL;
 		if (imm)
 			emit_a32_alu_i(dst_lo, imm, ctx, BPF_OP(code));
-		emit_a32_mov_i(dst_hi, 0, ctx);
+		if (ctx->prog->aux->no_verifier_zext)
+			emit_a32_mov_i(dst_hi, 0, ctx);
 		break;
 	/* dst = dst << imm */
 	case BPF_ALU64 | BPF_LSH | BPF_K:
@@ -1488,7 +1492,8 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 	/* dst = ~dst */
 	case BPF_ALU | BPF_NEG:
 		emit_a32_alu_i(dst_lo, 0, ctx, BPF_OP(code));
-		emit_a32_mov_i(dst_hi, 0, ctx);
+		if (ctx->prog->aux->no_verifier_zext)
+			emit_a32_mov_i(dst_hi, 0, ctx);
 		break;
 	/* dst = ~dst (64 bit) */
 	case BPF_ALU64 | BPF_NEG:
@@ -1838,6 +1843,11 @@ void bpf_jit_compile(struct bpf_prog *prog)
 	/* Nothing to do here. We support Internal BPF. */
 }

+bool bpf_jit_hardware_zext(void)
+{
+	return false;
+}
+
 struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 {
 	struct bpf_prog *tmp, *orig_prog = prog;

From patchwork Mon Apr 15 17:26:20 2019
X-Patchwork-Id: 1085814
From: Jiong Wang
To: alexei.starovoitov@gmail.com, daniel@iogearbox.net
Cc: bpf@vger.kernel.org, netdev@vger.kernel.org,
oss-drivers@netronome.com, Jiong Wang, "Naveen N. Rao", Sandipan Das
Subject: [PATCH v4 bpf-next 10/15] powerpc: bpf: eliminate zero extension code-gen
Date: Mon, 15 Apr 2019 18:26:20 +0100
Message-Id: <1555349185-12508-11-git-send-email-jiong.wang@netronome.com>
In-Reply-To: <1555349185-12508-1-git-send-email-jiong.wang@netronome.com>

Cc: Naveen N. Rao
Cc: Sandipan Das
Signed-off-by: Jiong Wang
---
 arch/powerpc/net/bpf_jit_comp64.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
index 21a1dcd..d10621b 100644
--- a/arch/powerpc/net/bpf_jit_comp64.c
+++ b/arch/powerpc/net/bpf_jit_comp64.c
@@ -559,7 +559,7 @@ static int bpf_jit_build_body(struct bpf_prog *fp, u32 *image,

 bpf_alu32_trunc:
 		/* Truncate to 32-bits */
-		if (BPF_CLASS(code) == BPF_ALU)
+		if (BPF_CLASS(code) == BPF_ALU && fp->aux->no_verifier_zext)
 			PPC_RLWINM(dst_reg, dst_reg, 0, 0, 31);
 		break;

@@ -1046,6 +1046,11 @@ struct powerpc64_jit_data {
 	struct codegen_context ctx;
 };

+bool bpf_jit_hardware_zext(void)
+{
+	return false;
+}
+
 struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
 {
 	u32 proglen;

From patchwork Mon Apr 15 17:26:21 2019
X-Patchwork-Id: 1085813
From: Jiong Wang
To: alexei.starovoitov@gmail.com, daniel@iogearbox.net
Cc: bpf@vger.kernel.org, netdev@vger.kernel.org, oss-drivers@netronome.com, Jiong Wang, Martin Schwidefsky, Heiko Carstens
Subject: [PATCH v4 bpf-next 11/15] s390: bpf: eliminate zero extension code-gen
Date: Mon, 15 Apr 2019 18:26:21 +0100
Message-Id: <1555349185-12508-12-git-send-email-jiong.wang@netronome.com>
In-Reply-To: <1555349185-12508-1-git-send-email-jiong.wang@netronome.com>

Cc: Martin Schwidefsky
Cc: Heiko Carstens
Signed-off-by: Jiong Wang
---
 arch/s390/net/bpf_jit_comp.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
index 51dd026..59592d7 100644
--- a/arch/s390/net/bpf_jit_comp.c
+++ b/arch/s390/net/bpf_jit_comp.c
@@ -299,9 +299,11 @@ static inline void reg_set_seen(struct bpf_jit *jit, u32 b1)

 #define EMIT_ZERO(b1)						\
 ({								\
-	/* llgfr %dst,%dst (zero extend to 64 bit) */		\
-	EMIT4(0xb9160000, b1, b1);				\
-	REG_SET_SEEN(b1);					\
+	if (fp->aux->no_verifier_zext) {			\
+		/* llgfr %dst,%dst (zero extend to 64 bit) */	\
+		EMIT4(0xb9160000, b1, b1);			\
+		REG_SET_SEEN(b1);				\
+	}							\
 })

 /*
@@ -1282,6 +1284,11 @@ static int bpf_jit_prog(struct bpf_jit *jit, struct bpf_prog *fp)
 	return 0;
 }

+bool bpf_jit_hardware_zext(void)
+{
+	return false;
+}
+
 /*
  * Compile eBPF program "fp"
  */

From patchwork Mon Apr 15 17:26:22 2019
X-Patchwork-Id: 1085817
PgGlHddb/o0nClxjDlXAgyqP6qNhuBaOE8AZtdRRpvGqmwym3MOf/ieffRSNCSlpvLyd PdiJ0owJApvTlCFI8kSUi6Dqe+vegvozOj1MNq/s8sReC9cN486Jzp+Q97avSRDOyHxn cKDA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=Pi+jMPgfzj2fBkAwzCkG0vyOGnD4cZfJGlYpVzkjSMY=; b=OvDhy3e7AJUkwX5U8MI65PLHV3igG0x8WaYwPvatnujdqNu35iw3/9wJaTfw5/BETK 7r2+eQWTwm+pShKnoCulCJ61sBKdSY7k2hI19esKgjjpiH2EpeKYcMSpLS2zgd3q+Ybo a2uBPx2fanvh4yZlm175iU3cFdYA25v2Tk/CqJ5rNPvyo1Aevs6gbQeNJSY0w63nuU36 HgXITllZkbjDRkhILEIFTB76XWSF9cn5cCiL/P6ff7C4SEYwM6/u18oUsRtS+8YvHsjr Ni+ZBlISH9tMslE2bpMjWVF6iC6jk00B+Cbh8GZMgw9tN97FSAk3osepWIvQsoqcrL74 y+Aw== X-Gm-Message-State: APjAAAU2C8Qbg49ojbqJb51LuP4Xe6lsn5lcrPgf906ITJS39BfgIcVI fpYKBC5lKMcVY+t0R0R31g4mTg== X-Google-Smtp-Source: APXvYqxJx5rtod4HgpqSbR4jQnRzjAd4JLhkmgw0wHzOAqhiyN86zoqvYB+uJD9ZRmnvwdiqDVTFmQ== X-Received: by 2002:a5d:69c7:: with SMTP id s7mr20139226wrw.71.1555349209771; Mon, 15 Apr 2019 10:26:49 -0700 (PDT) Received: from cbtest28.netronome.com ([217.38.71.146]) by smtp.gmail.com with ESMTPSA id v190sm27094232wme.18.2019.04.15.10.26.48 (version=TLS1_2 cipher=ECDHE-RSA-AES128-SHA bits=128/128); Mon, 15 Apr 2019 10:26:49 -0700 (PDT) From: Jiong Wang To: alexei.starovoitov@gmail.com, daniel@iogearbox.net Cc: bpf@vger.kernel.org, netdev@vger.kernel.org, oss-drivers@netronome.com, Jiong Wang , "David S . Miller" Subject: [PATCH v4 bpf-next 12/15] sparc: bpf: eliminate zero extension code-gen Date: Mon, 15 Apr 2019 18:26:22 +0100 Message-Id: <1555349185-12508-13-git-send-email-jiong.wang@netronome.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1555349185-12508-1-git-send-email-jiong.wang@netronome.com> References: <1555349185-12508-1-git-send-email-jiong.wang@netronome.com> Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org Cc: David S. 
Miller Signed-off-by: Jiong Wang --- arch/sparc/net/bpf_jit_comp_64.c | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/arch/sparc/net/bpf_jit_comp_64.c b/arch/sparc/net/bpf_jit_comp_64.c index 65428e7..e58f84e 100644 --- a/arch/sparc/net/bpf_jit_comp_64.c +++ b/arch/sparc/net/bpf_jit_comp_64.c @@ -1144,7 +1144,8 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx) break; do_alu32_trunc: - if (BPF_CLASS(code) == BPF_ALU) + if (BPF_CLASS(code) == BPF_ALU && + ctx->prog->aux->no_verifier_zext) emit_alu_K(SRL, dst, 0, ctx); break; @@ -1432,6 +1433,11 @@ static void jit_fill_hole(void *area, unsigned int size) *ptr++ = 0x91d02005; /* ta 5 */ } +bool bpf_jit_hardware_zext(void) +{ + return false; +} + struct sparc64_jit_data { struct bpf_binary_header *header; u8 *image; From patchwork Mon Apr 15 17:26:23 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiong Wang X-Patchwork-Id: 1085816 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Original-To: patchwork-incoming-netdev@ozlabs.org Delivered-To: patchwork-incoming-netdev@ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=netronome.com Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=netronome-com.20150623.gappssmtp.com header.i=@netronome-com.20150623.gappssmtp.com header.b="Ge9OY2Fa"; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 44jb5q4QXjz9ryj for ; Tue, 16 Apr 2019 03:26:55 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728017AbfDOR0y (ORCPT ); Mon, 15 Apr 2019 13:26:54 -0400 Received: from mail-wm1-f68.google.com 
([209.85.128.68]:50692 "EHLO mail-wm1-f68.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728003AbfDOR0w (ORCPT ); Mon, 15 Apr 2019 13:26:52 -0400 Received: by mail-wm1-f68.google.com with SMTP id z11so21789091wmi.0 for ; Mon, 15 Apr 2019 10:26:51 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=netronome-com.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=esldtJjtJ7LjAyiBdO+8AWliqTz0U0S4jG86huzM9ss=; b=Ge9OY2Fa//JdcR4OnzpqVXxc5GHUR+d1IHjI1P0qggnAU8QgjR8qR8mdHVFZYO6K5x MQL+fRyNpDdACt9AFklY9T97+KAdxwStndXVXQ4qBvD+LpCJDmaZtU7aNHvSPCmtCOTO Oc+2QHzGFUc8C1IT5bJD9P3AZWiVPS/1rurZXsTEapjMx04o8mHU4GsLVUHPFMghsNlg p5RxbL6U1zBRbOWli70vgVH26jc50tccZaan/f0L0P0z7ow9ABJB99BQFCjQSzq9TmiP evoZcDnphpXfYOhHjW3CO+vdys2oXH/5ze2Gk/g7ZIFp/AMlmiwNPhipMdIqDkvGL5cW G8jA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=esldtJjtJ7LjAyiBdO+8AWliqTz0U0S4jG86huzM9ss=; b=kO2lv0J7pH/ZzddO6f8LAA8DuzclB06UkkGZMOFW0zg4zL+ptOxKd5wswmQawVNIgu F6LXNt/EMmNDWPqHTI0BTVfJK7X7AmbKlvu0b+6S8P3i+H2c88/Aexi5tYRkv2lmqqNk UE3JvpuW6Bf/ydfEit234bqqm5I0CXG4yAd8VyrFdIFAb8PkR5Yr05JPWnpXhomX/PnY mmfEtaTJnZq89nzgYcv0dBEes1Za8Vei+onwOYtu9pnRE2/HEdG7btCfgpVXZ7t8kXc+ Ntyb7NvcMbKcqTt7bsE54+QRQSaN5o/YKgtOTaTxsKcB/IOi/2DkWAMBcgAPMyrMWsg+ 8IsA== X-Gm-Message-State: APjAAAUPsqQEw25bIzFBALhNlg2F0QUuErmDcy+XgHAodPo8H2p4FDqa 7666jua3KxCTIN+n1cWQzdSjKQ== X-Google-Smtp-Source: APXvYqxPNSFdcnZF4/ezwnlVsZsZOJ8m2vf/vlGTX67WCmlVLNw1F/gN9SyptisCzchWqS7MiHyCLw== X-Received: by 2002:a1c:1b45:: with SMTP id b66mr23626856wmb.77.1555349210952; Mon, 15 Apr 2019 10:26:50 -0700 (PDT) Received: from cbtest28.netronome.com ([217.38.71.146]) by smtp.gmail.com with ESMTPSA id v190sm27094232wme.18.2019.04.15.10.26.49 (version=TLS1_2 cipher=ECDHE-RSA-AES128-SHA bits=128/128); Mon, 15 Apr 2019 10:26:50 -0700 (PDT) From: Jiong Wang To: 
alexei.starovoitov@gmail.com, daniel@iogearbox.net Cc: bpf@vger.kernel.org, netdev@vger.kernel.org, oss-drivers@netronome.com, Jiong Wang , Wang YanQing Subject: [PATCH v4 bpf-next 13/15] x32: bpf: eliminate zero extension code-gen Date: Mon, 15 Apr 2019 18:26:23 +0100 Message-Id: <1555349185-12508-14-git-send-email-jiong.wang@netronome.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1555349185-12508-1-git-send-email-jiong.wang@netronome.com> References: <1555349185-12508-1-git-send-email-jiong.wang@netronome.com> Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org Cc: Wang YanQing Signed-off-by: Jiong Wang --- arch/x86/net/bpf_jit_comp32.c | 32 ++++++++++++++++++++++---------- 1 file changed, 22 insertions(+), 10 deletions(-) diff --git a/arch/x86/net/bpf_jit_comp32.c b/arch/x86/net/bpf_jit_comp32.c index 0d9cdff..8c6cf22 100644 --- a/arch/x86/net/bpf_jit_comp32.c +++ b/arch/x86/net/bpf_jit_comp32.c @@ -567,7 +567,7 @@ static inline void emit_ia32_alu_r(const bool is64, const bool hi, const u8 op, static inline void emit_ia32_alu_r64(const bool is64, const u8 op, const u8 dst[], const u8 src[], bool dstk, bool sstk, - u8 **pprog) + u8 **pprog, const struct bpf_prog_aux *aux) { u8 *prog = *pprog; @@ -575,7 +575,7 @@ static inline void emit_ia32_alu_r64(const bool is64, const u8 op, if (is64) emit_ia32_alu_r(is64, true, op, dst_hi, src_hi, dstk, sstk, &prog); - else + else if (aux->no_verifier_zext) emit_ia32_mov_i(dst_hi, 0, dstk, &prog); *pprog = prog; } @@ -666,7 +666,8 @@ static inline void emit_ia32_alu_i(const bool is64, const bool hi, const u8 op, /* ALU operation (64 bit) */ static inline void emit_ia32_alu_i64(const bool is64, const u8 op, const u8 dst[], const u32 val, - bool dstk, u8 **pprog) + bool dstk, u8 **pprog, + const struct bpf_prog_aux *aux) { u8 *prog = *pprog; u32 hi = 0; @@ -677,7 +678,7 @@ static inline void emit_ia32_alu_i64(const bool is64, const u8 op, emit_ia32_alu_i(is64, false, op, 
dst_lo, val, dstk, &prog); if (is64) emit_ia32_alu_i(is64, true, op, dst_hi, hi, dstk, &prog); - else + else if (aux->no_verifier_zext) emit_ia32_mov_i(dst_hi, 0, dstk, &prog); *pprog = prog; @@ -1690,11 +1691,13 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, switch (BPF_SRC(code)) { case BPF_X: emit_ia32_alu_r64(is64, BPF_OP(code), dst, - src, dstk, sstk, &prog); + src, dstk, sstk, &prog, + bpf_prog->aux); break; case BPF_K: emit_ia32_alu_i64(is64, BPF_OP(code), dst, - imm32, dstk, &prog); + imm32, dstk, &prog, + bpf_prog->aux); break; } break; @@ -1713,7 +1716,8 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, false, &prog); break; } - emit_ia32_mov_i(dst_hi, 0, dstk, &prog); + if (bpf_prog->aux->no_verifier_zext) + emit_ia32_mov_i(dst_hi, 0, dstk, &prog); break; case BPF_ALU | BPF_LSH | BPF_X: case BPF_ALU | BPF_RSH | BPF_X: @@ -1733,7 +1737,8 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, &prog); break; } - emit_ia32_mov_i(dst_hi, 0, dstk, &prog); + if (bpf_prog->aux->no_verifier_zext) + emit_ia32_mov_i(dst_hi, 0, dstk, &prog); break; /* dst = dst / src(imm) */ /* dst = dst % src(imm) */ @@ -1755,7 +1760,8 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, &prog); break; } - emit_ia32_mov_i(dst_hi, 0, dstk, &prog); + if (bpf_prog->aux->no_verifier_zext) + emit_ia32_mov_i(dst_hi, 0, dstk, &prog); break; case BPF_ALU64 | BPF_DIV | BPF_K: case BPF_ALU64 | BPF_DIV | BPF_X: @@ -1772,7 +1778,8 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, EMIT2_off32(0xC7, add_1reg(0xC0, IA32_ECX), imm32); emit_ia32_shift_r(BPF_OP(code), dst_lo, IA32_ECX, dstk, false, &prog); - emit_ia32_mov_i(dst_hi, 0, dstk, &prog); + if (bpf_prog->aux->no_verifier_zext) + emit_ia32_mov_i(dst_hi, 0, dstk, &prog); break; /* dst = dst << imm */ case BPF_ALU64 | BPF_LSH | BPF_K: @@ -2367,6 +2374,11 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, return proglen; } 
+bool bpf_jit_hardware_zext(void) +{ + return false; +} + struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog) { struct bpf_binary_header *header = NULL; From patchwork Mon Apr 15 17:26:24 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Jiong Wang X-Patchwork-Id: 1085818 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Original-To: incoming-bpf@patchwork.ozlabs.org Delivered-To: patchwork-incoming-bpf@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=bpf-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=netronome.com Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=netronome-com.20150623.gappssmtp.com header.i=@netronome-com.20150623.gappssmtp.com header.b="nYPovF/0"; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 44jb5s5TCPz9s9N for ; Tue, 16 Apr 2019 03:26:57 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728003AbfDOR04 (ORCPT ); Mon, 15 Apr 2019 13:26:56 -0400 Received: from mail-wm1-f66.google.com ([209.85.128.66]:40907 "EHLO mail-wm1-f66.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728010AbfDOR0y (ORCPT ); Mon, 15 Apr 2019 13:26:54 -0400 Received: by mail-wm1-f66.google.com with SMTP id z24so21624417wmi.5 for ; Mon, 15 Apr 2019 10:26:52 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=netronome-com.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=pNHJxDIKLsiKxKEQK4Ngr2CSz1DZ9y91uELS2TzjMyw=; b=nYPovF/0wBamE3/GDJKHdWR14F+2GLElFFej/mx+kTiMAGQ4c5Di/ShF8G0GDw2wpT 
Ltxmpuw1cAEfpTxxy4CtAVLBtqYo5iyW9Va5kJX2L2LuXQUL/9+VFv9xr3IfbGE7CPY6 kkp66cizC7Kzbutdrs3DXc12AgtOIVCt/MTFK3dp5l2zVWGZMq/6O8qFxvw/XjSgakiG vmUWLU3cTqTSoXQhBef746AlyOXHY3tw3iYy6H0sRlrN9EjhX/wLFAagrQk7kQ4Eag3H 0zNXicts/rTCKdqZ+esKhvM2Vz6SWImheF71DkRfJDP/wPDnm0bJqbx2Qg58e2VOjNiq 1m4w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=pNHJxDIKLsiKxKEQK4Ngr2CSz1DZ9y91uELS2TzjMyw=; b=csL6jFETyY79jlm7P2IIAkM8an3e2cIwnFjxEEQ0WlrrNqZ3mZjdTsnzq4Tr8RAyga nd8F8cJ1pDUnfrwHyIZvPp2i3kHh7CFEk693GxXm8jQycwfC3rZlewQAcsTj/U+F7a7f T5y93IJgrQr2he2u4Przzwa5ov8RdBZsb0RtAi3XFwMp51s86La1Zlc2ggVlesCyz6IP 5WaGo8+si8RGmqIXUtQ6nODc3dMDMAKwpw+0ajPJL8v4+IByn3Oeg8kcThPR8N9w/9bt sStoh5IzCxMp5UCYOooq7zudkkDHQ69yyfgxZ6obevIJ329Okd9PxMroHn8NN7DOPUN9 cuxw== X-Gm-Message-State: APjAAAVclGt2ouz3l1eV1g4HZ8WK2khQcIyAlXAcTTCsJsN2Bfqgu06u fZbsgXha787ZGCZp2SsZ6cmRzQ== X-Google-Smtp-Source: APXvYqw8amdBh0ONW5U+jqjNA1j/TAWQ40R/UxJaRzYhhB6Lv3jYoeF8NmEqkcPKSadyOe8ZZbWpAg== X-Received: by 2002:a1c:9c03:: with SMTP id f3mr23928629wme.67.1555349211893; Mon, 15 Apr 2019 10:26:51 -0700 (PDT) Received: from cbtest28.netronome.com ([217.38.71.146]) by smtp.gmail.com with ESMTPSA id v190sm27094232wme.18.2019.04.15.10.26.50 (version=TLS1_2 cipher=ECDHE-RSA-AES128-SHA bits=128/128); Mon, 15 Apr 2019 10:26:51 -0700 (PDT) From: Jiong Wang To: alexei.starovoitov@gmail.com, daniel@iogearbox.net Cc: bpf@vger.kernel.org, netdev@vger.kernel.org, oss-drivers@netronome.com, Jiong Wang , =?utf-8?b?QmrDtnJuIFTDtnBlbA==?= Subject: [PATCH v4 bpf-next 14/15] riscv: bpf: eliminate zero extension code-gen Date: Mon, 15 Apr 2019 18:26:24 +0100 Message-Id: <1555349185-12508-15-git-send-email-jiong.wang@netronome.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1555349185-12508-1-git-send-email-jiong.wang@netronome.com> References: 
<1555349185-12508-1-git-send-email-jiong.wang@netronome.com> MIME-Version: 1.0 Sender: bpf-owner@vger.kernel.org Precedence: bulk List-Id: netdev.vger.kernel.org CC: Björn Töpel Signed-off-by: Jiong Wang Acked-by: Björn Töpel --- arch/riscv/net/bpf_jit_comp.c | 32 +++++++++++++++++++------------- 1 file changed, 19 insertions(+), 13 deletions(-) diff --git a/arch/riscv/net/bpf_jit_comp.c b/arch/riscv/net/bpf_jit_comp.c index 80b12aa..9cba262 100644 --- a/arch/riscv/net/bpf_jit_comp.c +++ b/arch/riscv/net/bpf_jit_comp.c @@ -731,6 +731,7 @@ static int emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx, { bool is64 = BPF_CLASS(insn->code) == BPF_ALU64 || BPF_CLASS(insn->code) == BPF_JMP; + struct bpf_prog_aux *aux = ctx->prog->aux; int rvoff, i = insn - ctx->prog->insnsi; u8 rd = -1, rs = -1, code = insn->code; s16 off = insn->off; @@ -743,7 +744,7 @@ static int emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx, case BPF_ALU | BPF_MOV | BPF_X: case BPF_ALU64 | BPF_MOV | BPF_X: emit(is64 ? rv_addi(rd, rs, 0) : rv_addiw(rd, rs, 0), ctx); - if (!is64) + if (!is64 && aux->no_verifier_zext) emit_zext_32(rd, ctx); break; @@ -771,19 +772,19 @@ static int emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx, case BPF_ALU | BPF_MUL | BPF_X: case BPF_ALU64 | BPF_MUL | BPF_X: emit(is64 ? rv_mul(rd, rd, rs) : rv_mulw(rd, rd, rs), ctx); - if (!is64) + if (!is64 && aux->no_verifier_zext) emit_zext_32(rd, ctx); break; case BPF_ALU | BPF_DIV | BPF_X: case BPF_ALU64 | BPF_DIV | BPF_X: emit(is64 ? rv_divu(rd, rd, rs) : rv_divuw(rd, rd, rs), ctx); - if (!is64) + if (!is64 && aux->no_verifier_zext) emit_zext_32(rd, ctx); break; case BPF_ALU | BPF_MOD | BPF_X: case BPF_ALU64 | BPF_MOD | BPF_X: emit(is64 ? 
rv_remu(rd, rd, rs) : rv_remuw(rd, rd, rs), ctx); - if (!is64) + if (!is64 && aux->no_verifier_zext) emit_zext_32(rd, ctx); break; case BPF_ALU | BPF_LSH | BPF_X: @@ -867,7 +868,7 @@ static int emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx, case BPF_ALU | BPF_MOV | BPF_K: case BPF_ALU64 | BPF_MOV | BPF_K: emit_imm(rd, imm, ctx); - if (!is64) + if (!is64 && aux->no_verifier_zext) emit_zext_32(rd, ctx); break; @@ -882,7 +883,7 @@ static int emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx, emit(is64 ? rv_add(rd, rd, RV_REG_T1) : rv_addw(rd, rd, RV_REG_T1), ctx); } - if (!is64) + if (!is64 && aux->no_verifier_zext) emit_zext_32(rd, ctx); break; case BPF_ALU | BPF_SUB | BPF_K: @@ -895,7 +896,7 @@ static int emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx, emit(is64 ? rv_sub(rd, rd, RV_REG_T1) : rv_subw(rd, rd, RV_REG_T1), ctx); } - if (!is64) + if (!is64 && aux->no_verifier_zext) emit_zext_32(rd, ctx); break; case BPF_ALU | BPF_AND | BPF_K: @@ -906,7 +907,7 @@ static int emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx, emit_imm(RV_REG_T1, imm, ctx); emit(rv_and(rd, rd, RV_REG_T1), ctx); } - if (!is64) + if (!is64 && aux->no_verifier_zext) emit_zext_32(rd, ctx); break; case BPF_ALU | BPF_OR | BPF_K: @@ -917,7 +918,7 @@ static int emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx, emit_imm(RV_REG_T1, imm, ctx); emit(rv_or(rd, rd, RV_REG_T1), ctx); } - if (!is64) + if (!is64 && aux->no_verifier_zext) emit_zext_32(rd, ctx); break; case BPF_ALU | BPF_XOR | BPF_K: @@ -928,7 +929,7 @@ static int emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx, emit_imm(RV_REG_T1, imm, ctx); emit(rv_xor(rd, rd, RV_REG_T1), ctx); } - if (!is64) + if (!is64 && aux->no_verifier_zext) emit_zext_32(rd, ctx); break; case BPF_ALU | BPF_MUL | BPF_K: @@ -936,7 +937,7 @@ static int emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx, emit_imm(RV_REG_T1, imm, ctx); emit(is64 ? 
rv_mul(rd, rd, RV_REG_T1) : rv_mulw(rd, rd, RV_REG_T1), ctx); - if (!is64) + if (!is64 && aux->no_verifier_zext) emit_zext_32(rd, ctx); break; case BPF_ALU | BPF_DIV | BPF_K: @@ -944,7 +945,7 @@ static int emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx, emit_imm(RV_REG_T1, imm, ctx); emit(is64 ? rv_divu(rd, rd, RV_REG_T1) : rv_divuw(rd, rd, RV_REG_T1), ctx); - if (!is64) + if (!is64 && aux->no_verifier_zext) emit_zext_32(rd, ctx); break; case BPF_ALU | BPF_MOD | BPF_K: @@ -952,7 +953,7 @@ static int emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx, emit_imm(RV_REG_T1, imm, ctx); emit(is64 ? rv_remu(rd, rd, RV_REG_T1) : rv_remuw(rd, rd, RV_REG_T1), ctx); - if (!is64) + if (!is64 && aux->no_verifier_zext) emit_zext_32(rd, ctx); break; case BPF_ALU | BPF_LSH | BPF_K: @@ -1503,6 +1504,11 @@ static void bpf_flush_icache(void *start, void *end) flush_icache_range((unsigned long)start, (unsigned long)end); } +bool bpf_jit_hardware_zext(void) +{ + return false; +} + struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog) { bool tmp_blinded = false, extra_pass = false; From patchwork Mon Apr 15 17:26:25 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiong Wang X-Patchwork-Id: 1085819 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Original-To: patchwork-incoming-netdev@ozlabs.org Delivered-To: patchwork-incoming-netdev@ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=netronome.com Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=netronome-com.20150623.gappssmtp.com header.i=@netronome-com.20150623.gappssmtp.com header.b="fGNHMh/S"; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org 
[209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 44jb5x2yftz9s9T for ; Tue, 16 Apr 2019 03:27:01 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727998AbfDOR07 (ORCPT ); Mon, 15 Apr 2019 13:26:59 -0400 Received: from mail-wr1-f68.google.com ([209.85.221.68]:34760 "EHLO mail-wr1-f68.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727895AbfDOR0z (ORCPT ); Mon, 15 Apr 2019 13:26:55 -0400 Received: by mail-wr1-f68.google.com with SMTP id p10so23030224wrq.1 for ; Mon, 15 Apr 2019 10:26:54 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=netronome-com.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=qxA1L/gVWESQpZ3Mokilcchz/Sf49i0tM5B88NgcAXI=; b=fGNHMh/S4Ip66Mu37dXdYgiMYnIQaBV7cvE9F8cmzlmxCh1t5bU4UFJBHSbmTjE/Mk 9x1myLH2dbKWxyiR3Rf3zJQhf625wQeVCTBK0lv5n5tikN7FIdp0K99PSOBX/tgIIVxN rFRUWAHtjhDnevhwkWXEecuho18nvG8BPLgctvTc+UXX6dR+LDSoLlQH4i+Y6SWJRDRy aeSWSBlDEjvhpe+Vs0xURjCMy0At2+8qshriNeMDooMuJX/CB0bQGep0yTOpQLsXgh1Y W2j2CXgW0lxW4gTqY5F7zyQgGMyXLWoiS3YlBmw3yxsOG8q05nyN2cRDa0g9/lpcOIsd HfjA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=qxA1L/gVWESQpZ3Mokilcchz/Sf49i0tM5B88NgcAXI=; b=dH0zWK+Elwxozb2sEwDguoUiL2seeDn5zvxRIOHhYrtnsEKCfE9eL/gVUvC51aEftx UbaUmA5oH6dVbrmAbIQg+8qaaXjY9eTuRuA7zca5OwFlbrkEwddFm1Z9EfIuL8I9c4E4 Xay1uQ1YuYxlH1PMPjP8/lygliwSpvdUky/ig9MzD9V1UFtYDSJngYDMfxODIqwZmm10 opLLrPW6Xj2hYzSzKjQdg5mosNFgnrBTsKqUVzPo1g5lR7O9/yQotjGASODnDSh9eBIb CXjNU8DcAmSIVxXWfFIjFtJvF0CX2fIs9xvrycXGeXszylUZHj23AeI96DRVVoSJJjO6 F2ww== X-Gm-Message-State: APjAAAW1RQHD5hHhGhBsaT1DJXxr1LD36a71hySb1shqctNYJxkR33y6 cV4Rx6oiBzBTL6fXQ2LnOAyCAiYlrKs= X-Google-Smtp-Source: APXvYqwoLIAb9Xwqdp1Xdl2knZ8mcbtNF7GEoN4l1cWwaqxy14T8SagTESqqo601zz+unws97dV/Vg== X-Received: by 2002:a5d:4492:: with SMTP id 
j18mr5038268wrq.212.1555349213368; Mon, 15 Apr 2019 10:26:53 -0700 (PDT) Received: from cbtest28.netronome.com ([217.38.71.146]) by smtp.gmail.com with ESMTPSA id v190sm27094232wme.18.2019.04.15.10.26.51 (version=TLS1_2 cipher=ECDHE-RSA-AES128-SHA bits=128/128); Mon, 15 Apr 2019 10:26:52 -0700 (PDT) From: Jiong Wang To: alexei.starovoitov@gmail.com, daniel@iogearbox.net Cc: bpf@vger.kernel.org, netdev@vger.kernel.org, oss-drivers@netronome.com, Jiong Wang Subject: [PATCH v4 bpf-next 15/15] nfp: bpf: eliminate zero extension code-gen Date: Mon, 15 Apr 2019 18:26:25 +0100 Message-Id: <1555349185-12508-16-git-send-email-jiong.wang@netronome.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1555349185-12508-1-git-send-email-jiong.wang@netronome.com> References: <1555349185-12508-1-git-send-email-jiong.wang@netronome.com> Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org This patch eliminate zero extension code-gen for instructions except load/store when possible. Elimination for load/store will be supported in up coming patches. 
Reviewed-by: Jakub Kicinski Signed-off-by: Jiong Wang --- drivers/net/ethernet/netronome/nfp/bpf/jit.c | 119 +++++++++++++--------- drivers/net/ethernet/netronome/nfp/bpf/main.h | 2 + drivers/net/ethernet/netronome/nfp/bpf/verifier.c | 12 +++ 3 files changed, 83 insertions(+), 50 deletions(-) diff --git a/drivers/net/ethernet/netronome/nfp/bpf/jit.c b/drivers/net/ethernet/netronome/nfp/bpf/jit.c index f272247..eb30c52 100644 --- a/drivers/net/ethernet/netronome/nfp/bpf/jit.c +++ b/drivers/net/ethernet/netronome/nfp/bpf/jit.c @@ -612,6 +612,13 @@ static void wrp_immed(struct nfp_prog *nfp_prog, swreg dst, u32 imm) } static void +wrp_zext(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta, u8 dst) +{ + if (meta->flags & FLAG_INSN_DO_ZEXT) + wrp_immed(nfp_prog, reg_both(dst + 1), 0); +} + +static void wrp_immed_relo(struct nfp_prog *nfp_prog, swreg dst, u32 imm, enum nfp_relo_type relo) { @@ -847,7 +854,8 @@ static int nfp_cpp_memcpy(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) } static int -data_ld(struct nfp_prog *nfp_prog, swreg offset, u8 dst_gpr, int size) +data_ld(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta, swreg offset, + u8 dst_gpr, int size) { unsigned int i; u16 shift, sz; @@ -870,14 +878,15 @@ data_ld(struct nfp_prog *nfp_prog, swreg offset, u8 dst_gpr, int size) wrp_mov(nfp_prog, reg_both(dst_gpr + i), reg_xfer(i)); if (i < 2) - wrp_immed(nfp_prog, reg_both(dst_gpr + 1), 0); + wrp_zext(nfp_prog, meta, dst_gpr); return 0; } static int -data_ld_host_order(struct nfp_prog *nfp_prog, u8 dst_gpr, - swreg lreg, swreg rreg, int size, enum cmd_mode mode) +data_ld_host_order(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta, + u8 dst_gpr, swreg lreg, swreg rreg, int size, + enum cmd_mode mode) { unsigned int i; u8 mask, sz; @@ -900,33 +909,34 @@ data_ld_host_order(struct nfp_prog *nfp_prog, u8 dst_gpr, wrp_mov(nfp_prog, reg_both(dst_gpr + i), reg_xfer(i)); if (i < 2) - wrp_immed(nfp_prog, reg_both(dst_gpr + 1), 0); + wrp_zext(nfp_prog, 
meta, dst_gpr); return 0; } static int -data_ld_host_order_addr32(struct nfp_prog *nfp_prog, u8 src_gpr, swreg offset, - u8 dst_gpr, u8 size) +data_ld_host_order_addr32(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta, + u8 src_gpr, swreg offset, u8 dst_gpr, u8 size) { - return data_ld_host_order(nfp_prog, dst_gpr, reg_a(src_gpr), offset, - size, CMD_MODE_32b); + return data_ld_host_order(nfp_prog, meta, dst_gpr, reg_a(src_gpr), + offset, size, CMD_MODE_32b); } static int -data_ld_host_order_addr40(struct nfp_prog *nfp_prog, u8 src_gpr, swreg offset, - u8 dst_gpr, u8 size) +data_ld_host_order_addr40(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta, + u8 src_gpr, swreg offset, u8 dst_gpr, u8 size) { swreg rega, regb; addr40_offset(nfp_prog, src_gpr, offset, ®a, ®b); - return data_ld_host_order(nfp_prog, dst_gpr, rega, regb, + return data_ld_host_order(nfp_prog, meta, dst_gpr, rega, regb, size, CMD_MODE_40b_BA); } static int -construct_data_ind_ld(struct nfp_prog *nfp_prog, u16 offset, u16 src, u8 size) +construct_data_ind_ld(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta, + u16 offset, u16 src, u8 size) { swreg tmp_reg; @@ -942,10 +952,12 @@ construct_data_ind_ld(struct nfp_prog *nfp_prog, u16 offset, u16 src, u8 size) emit_br_relo(nfp_prog, BR_BLO, BR_OFF_RELO, 0, RELO_BR_GO_ABORT); /* Load data */ - return data_ld(nfp_prog, imm_b(nfp_prog), 0, size); + return data_ld(nfp_prog, meta, imm_b(nfp_prog), 0, size); } -static int construct_data_ld(struct nfp_prog *nfp_prog, u16 offset, u8 size) +static int +construct_data_ld(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta, + u16 offset, u8 size) { swreg tmp_reg; @@ -956,7 +968,7 @@ static int construct_data_ld(struct nfp_prog *nfp_prog, u16 offset, u8 size) /* Load data */ tmp_reg = re_load_imm_any(nfp_prog, offset, imm_b(nfp_prog)); - return data_ld(nfp_prog, tmp_reg, 0, size); + return data_ld(nfp_prog, meta, tmp_reg, 0, size); } static int @@ -1193,7 +1205,7 @@ mem_op_stack(struct nfp_prog 
*nfp_prog, struct nfp_insn_meta *meta, } if (clr_gpr && size < 8) - wrp_immed(nfp_prog, reg_both(gpr + 1), 0); + wrp_zext(nfp_prog, meta, gpr); while (size) { u32 slice_end; @@ -1294,9 +1306,10 @@ wrp_alu32_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta, enum alu_op alu_op) { const struct bpf_insn *insn = &meta->insn; + u8 dst = insn->dst_reg * 2; - wrp_alu_imm(nfp_prog, insn->dst_reg * 2, alu_op, insn->imm); - wrp_immed(nfp_prog, reg_both(insn->dst_reg * 2 + 1), 0); + wrp_alu_imm(nfp_prog, dst, alu_op, insn->imm); + wrp_zext(nfp_prog, meta, dst); return 0; } @@ -1308,7 +1321,7 @@ wrp_alu32_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta, u8 dst = meta->insn.dst_reg * 2, src = meta->insn.src_reg * 2; emit_alu(nfp_prog, reg_both(dst), reg_a(dst), alu_op, reg_b(src)); - wrp_immed(nfp_prog, reg_both(meta->insn.dst_reg * 2 + 1), 0); + wrp_zext(nfp_prog, meta, dst); return 0; } @@ -2385,12 +2398,14 @@ static int neg_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) u8 dst = meta->insn.dst_reg * 2; emit_alu(nfp_prog, reg_both(dst), reg_imm(0), ALU_OP_SUB, reg_b(dst)); - wrp_immed(nfp_prog, reg_both(meta->insn.dst_reg * 2 + 1), 0); + wrp_zext(nfp_prog, meta, dst); return 0; } -static int __ashr_imm(struct nfp_prog *nfp_prog, u8 dst, u8 shift_amt) +static int +__ashr_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta, u8 dst, + u8 shift_amt) { if (shift_amt) { /* Set signedness bit (MSB of result). 
*/ @@ -2399,7 +2414,7 @@ static int __ashr_imm(struct nfp_prog *nfp_prog, u8 dst, u8 shift_amt) emit_shf(nfp_prog, reg_both(dst), reg_none(), SHF_OP_ASHR, reg_b(dst), SHF_SC_R_SHF, shift_amt); } - wrp_immed(nfp_prog, reg_both(dst + 1), 0); + wrp_zext(nfp_prog, meta, dst); return 0; } @@ -2414,7 +2429,7 @@ static int ashr_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) umin = meta->umin_src; umax = meta->umax_src; if (umin == umax) - return __ashr_imm(nfp_prog, dst, umin); + return __ashr_imm(nfp_prog, meta, dst, umin); src = insn->src_reg * 2; /* NOTE: the first insn will set both indirect shift amount (source A) @@ -2423,7 +2438,7 @@ static int ashr_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) emit_alu(nfp_prog, reg_none(), reg_a(src), ALU_OP_OR, reg_b(dst)); emit_shf_indir(nfp_prog, reg_both(dst), reg_none(), SHF_OP_ASHR, reg_b(dst), SHF_SC_R_SHF); - wrp_immed(nfp_prog, reg_both(dst + 1), 0); + wrp_zext(nfp_prog, meta, dst); return 0; } @@ -2433,15 +2448,17 @@ static int ashr_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) const struct bpf_insn *insn = &meta->insn; u8 dst = insn->dst_reg * 2; - return __ashr_imm(nfp_prog, dst, insn->imm); + return __ashr_imm(nfp_prog, meta, dst, insn->imm); } -static int __shr_imm(struct nfp_prog *nfp_prog, u8 dst, u8 shift_amt) +static int +__shr_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta, u8 dst, + u8 shift_amt) { if (shift_amt) emit_shf(nfp_prog, reg_both(dst), reg_none(), SHF_OP_NONE, reg_b(dst), SHF_SC_R_SHF, shift_amt); - wrp_immed(nfp_prog, reg_both(dst + 1), 0); + wrp_zext(nfp_prog, meta, dst); return 0; } @@ -2450,7 +2467,7 @@ static int shr_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) const struct bpf_insn *insn = &meta->insn; u8 dst = insn->dst_reg * 2; - return __shr_imm(nfp_prog, dst, insn->imm); + return __shr_imm(nfp_prog, meta, dst, insn->imm); } static int shr_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) @@ -2463,22 +2480,24 @@ static 
int shr_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
 	umin = meta->umin_src;
 	umax = meta->umax_src;
 	if (umin == umax)
-		return __shr_imm(nfp_prog, dst, umin);
+		return __shr_imm(nfp_prog, meta, dst, umin);
 
 	src = insn->src_reg * 2;
 
 	emit_alu(nfp_prog, reg_none(), reg_a(src), ALU_OP_OR, reg_imm(0));
 	emit_shf_indir(nfp_prog, reg_both(dst), reg_none(), SHF_OP_NONE,
 		       reg_b(dst), SHF_SC_R_SHF);
-	wrp_immed(nfp_prog, reg_both(dst + 1), 0);
+	wrp_zext(nfp_prog, meta, dst);
 
 	return 0;
 }
 
-static int __shl_imm(struct nfp_prog *nfp_prog, u8 dst, u8 shift_amt)
+static int
+__shl_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta, u8 dst,
+	  u8 shift_amt)
 {
 	if (shift_amt)
 		emit_shf(nfp_prog, reg_both(dst), reg_none(), SHF_OP_NONE,
 			 reg_b(dst), SHF_SC_L_SHF, shift_amt);
-	wrp_immed(nfp_prog, reg_both(dst + 1), 0);
+	wrp_zext(nfp_prog, meta, dst);
 
 	return 0;
 }
 
@@ -2487,7 +2506,7 @@ static int shl_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
 	const struct bpf_insn *insn = &meta->insn;
 	u8 dst = insn->dst_reg * 2;
 
-	return __shl_imm(nfp_prog, dst, insn->imm);
+	return __shl_imm(nfp_prog, meta, dst, insn->imm);
 }
 
 static int shl_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
@@ -2500,11 +2519,11 @@ static int shl_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
 	umin = meta->umin_src;
 	umax = meta->umax_src;
 	if (umin == umax)
-		return __shl_imm(nfp_prog, dst, umin);
+		return __shl_imm(nfp_prog, meta, dst, umin);
 
 	src = insn->src_reg * 2;
 	shl_reg64_lt32_low(nfp_prog, dst, src);
-	wrp_immed(nfp_prog, reg_both(dst + 1), 0);
+	wrp_zext(nfp_prog, meta, dst);
 
 	return 0;
 }
 
@@ -2566,34 +2585,34 @@ static int imm_ld8(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
 
 static int data_ld1(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
 {
-	return construct_data_ld(nfp_prog, meta->insn.imm, 1);
+	return construct_data_ld(nfp_prog, meta, meta->insn.imm, 1);
 }
 
 static int data_ld2(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
 {
-	return construct_data_ld(nfp_prog, meta->insn.imm, 2);
+	return construct_data_ld(nfp_prog, meta, meta->insn.imm, 2);
 }
 
 static int data_ld4(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
 {
-	return construct_data_ld(nfp_prog, meta->insn.imm, 4);
+	return construct_data_ld(nfp_prog, meta, meta->insn.imm, 4);
 }
 
 static int data_ind_ld1(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
 {
-	return construct_data_ind_ld(nfp_prog, meta->insn.imm,
+	return construct_data_ind_ld(nfp_prog, meta, meta->insn.imm,
 				     meta->insn.src_reg * 2, 1);
 }
 
 static int data_ind_ld2(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
 {
-	return construct_data_ind_ld(nfp_prog, meta->insn.imm,
+	return construct_data_ind_ld(nfp_prog, meta, meta->insn.imm,
 				     meta->insn.src_reg * 2, 2);
 }
 
 static int data_ind_ld4(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
 {
-	return construct_data_ind_ld(nfp_prog, meta->insn.imm,
+	return construct_data_ind_ld(nfp_prog, meta, meta->insn.imm,
 				     meta->insn.src_reg * 2, 4);
 }
 
@@ -2632,7 +2651,7 @@ static int mem_ldx_skb(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta,
 		return -EOPNOTSUPP;
 	}
 
-	wrp_immed(nfp_prog, reg_both(meta->insn.dst_reg * 2 + 1), 0);
+	wrp_zext(nfp_prog, meta, dst);
 
 	return 0;
 }
 
@@ -2658,7 +2677,7 @@ static int mem_ldx_xdp(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta,
 		return -EOPNOTSUPP;
 	}
 
-	wrp_immed(nfp_prog, reg_both(meta->insn.dst_reg * 2 + 1), 0);
+	wrp_zext(nfp_prog, meta, dst);
 
 	return 0;
 }
 
@@ -2671,7 +2690,7 @@ mem_ldx_data(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta,
 
 	tmp_reg = re_load_imm_any(nfp_prog, meta->insn.off, imm_b(nfp_prog));
 
-	return data_ld_host_order_addr32(nfp_prog, meta->insn.src_reg * 2,
+	return data_ld_host_order_addr32(nfp_prog, meta, meta->insn.src_reg * 2,
 					 tmp_reg, meta->insn.dst_reg * 2, size);
 }
 
@@ -2683,7 +2702,7 @@ mem_ldx_emem(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta,
 
 	tmp_reg = re_load_imm_any(nfp_prog, meta->insn.off, imm_b(nfp_prog));
 
-	return data_ld_host_order_addr40(nfp_prog, meta->insn.src_reg * 2,
+	return data_ld_host_order_addr40(nfp_prog, meta, meta->insn.src_reg * 2,
 					 tmp_reg, meta->insn.dst_reg * 2, size);
 }
 
@@ -2744,7 +2763,7 @@ mem_ldx_data_from_pktcache_unaligned(struct nfp_prog *nfp_prog,
 	wrp_reg_subpart(nfp_prog, dst_lo, src_lo, len_lo, off);
 
 	if (!len_mid) {
-		wrp_immed(nfp_prog, dst_hi, 0);
+		wrp_zext(nfp_prog, meta, dst_gpr);
 		return 0;
 	}
 
@@ -2752,7 +2771,7 @@ mem_ldx_data_from_pktcache_unaligned(struct nfp_prog *nfp_prog,
 
 	if (size <= REG_WIDTH) {
 		wrp_reg_or_subpart(nfp_prog, dst_lo, src_mid, len_mid, len_lo);
-		wrp_immed(nfp_prog, dst_hi, 0);
+		wrp_zext(nfp_prog, meta, dst_gpr);
 	} else {
 		swreg src_hi = reg_xfer(idx + 2);
 
@@ -2783,10 +2802,10 @@ mem_ldx_data_from_pktcache_aligned(struct nfp_prog *nfp_prog,
 
 	if (size < REG_WIDTH) {
 		wrp_reg_subpart(nfp_prog, dst_lo, src_lo, size, 0);
-		wrp_immed(nfp_prog, dst_hi, 0);
+		wrp_zext(nfp_prog, meta, dst_gpr);
 	} else if (size == REG_WIDTH) {
 		wrp_mov(nfp_prog, dst_lo, src_lo);
-		wrp_immed(nfp_prog, dst_hi, 0);
+		wrp_zext(nfp_prog, meta, dst_gpr);
 	} else {
 		swreg src_hi = reg_xfer(idx + 1);
 
diff --git a/drivers/net/ethernet/netronome/nfp/bpf/main.h b/drivers/net/ethernet/netronome/nfp/bpf/main.h
index b25a482..7369bdf 100644
--- a/drivers/net/ethernet/netronome/nfp/bpf/main.h
+++ b/drivers/net/ethernet/netronome/nfp/bpf/main.h
@@ -249,6 +249,8 @@ struct nfp_bpf_reg_state {
 #define FLAG_INSN_SKIP_PREC_DEPENDENT		BIT(4)
 /* Instruction is optimized by the verifier */
 #define FLAG_INSN_SKIP_VERIFIER_OPT		BIT(5)
+/* Instruction needs to zero extend to high 32-bit */
+#define FLAG_INSN_DO_ZEXT			BIT(6)
 
 #define FLAG_INSN_SKIP_MASK		(FLAG_INSN_SKIP_NOOP | \
 					 FLAG_INSN_SKIP_PREC_DEPENDENT | \
diff --git a/drivers/net/ethernet/netronome/nfp/bpf/verifier.c b/drivers/net/ethernet/netronome/nfp/bpf/verifier.c
index 36f56eb..e92ee51 100644
--- a/drivers/net/ethernet/netronome/nfp/bpf/verifier.c
+++ b/drivers/net/ethernet/netronome/nfp/bpf/verifier.c
@@ -744,6 +744,17 @@ static unsigned int nfp_bpf_get_stack_usage(struct nfp_prog *nfp_prog)
 		goto continue_subprog;
 	}
 
+static void nfp_bpf_insn_flag_zext(struct nfp_prog *nfp_prog,
+				   struct bpf_insn_aux_data *aux)
+{
+	struct nfp_insn_meta *meta;
+
+	list_for_each_entry(meta, &nfp_prog->insns, l) {
+		if (aux[meta->n].zext_dst)
+			meta->flags |= FLAG_INSN_DO_ZEXT;
+	}
+}
+
 int nfp_bpf_finalize(struct bpf_verifier_env *env)
 {
 	struct bpf_subprog_info *info;
@@ -784,6 +795,7 @@ int nfp_bpf_finalize(struct bpf_verifier_env *env)
 		return -EOPNOTSUPP;
 	}
 
+	nfp_bpf_insn_flag_zext(nfp_prog, env->insn_aux_data);
 	return 0;
 }