From patchwork Fri May 3 10:42:28 2019
X-Patchwork-Submitter: Jiong Wang
X-Patchwork-Id: 1094779
X-Patchwork-Delegate: bpf@iogearbox.net
From: Jiong Wang
To: alexei.starovoitov@gmail.com, daniel@iogearbox.net
Cc: bpf@vger.kernel.org, netdev@vger.kernel.org, oss-drivers@netronome.com
Subject: [PATCH v6 bpf-next 01/17] bpf: verifier: offer more accurate helper function arg and return type
Date: Fri, 3 May 2019 11:42:28 +0100
Message-Id: <1556880164-10689-2-git-send-email-jiong.wang@netronome.com>
In-Reply-To: <1556880164-10689-1-git-send-email-jiong.wang@netronome.com>
References: <1556880164-10689-1-git-send-email-jiong.wang@netronome.com>
X-Mailing-List: netdev@vger.kernel.org

BPF helper call transfers execution from eBPF insns to native functions while verifier insn walker only walks eBPF insns.
So, the verifier can only know argument and return value types from explicit helper function prototype descriptions.

For the 32-bit optimization, it is important to know whether an argument (a register use from an eBPF insn) and the return value (a register define from the external function) are 32-bit or 64-bit, so the corresponding registers can be zero-extended correctly.

Arguments are register uses; we conservatively treat all of them as 64-bit by default, while the following new bpf_arg_type values are added so we can start marking frequently used helper functions with more accurate argument types:

  ARG_CONST_SIZE32
  ARG_CONST_SIZE32_OR_ZERO
  ARG_ANYTHING32

A few helper functions that show up frequently inside Cilium BPF programs are updated to use these new types.

Return values are register defs; we need to know their accurate width for correct zero extension. Given that most integer-returning helper functions return a 32-bit value, a new RET_INTEGER64 is added to mark those functions that return a 64-bit value. All related helper functions are updated.
Signed-off-by: Jiong Wang --- include/linux/bpf.h | 6 +++++- kernel/bpf/core.c | 2 +- kernel/bpf/helpers.c | 10 +++++----- kernel/bpf/verifier.c | 15 ++++++++++----- kernel/trace/bpf_trace.c | 4 ++-- net/core/filter.c | 38 +++++++++++++++++++------------------- 6 files changed, 42 insertions(+), 33 deletions(-) diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 9a21848..11a5fb9 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -198,9 +198,12 @@ enum bpf_arg_type { ARG_CONST_SIZE, /* number of bytes accessed from memory */ ARG_CONST_SIZE_OR_ZERO, /* number of bytes accessed from memory or 0 */ + ARG_CONST_SIZE32, /* Likewise, but size fits into 32-bit */ + ARG_CONST_SIZE32_OR_ZERO, /* Ditto */ ARG_PTR_TO_CTX, /* pointer to context */ ARG_ANYTHING, /* any (initialized) argument is ok */ + ARG_ANYTHING32, /* Likewise, but it is a 32-bit argument */ ARG_PTR_TO_SPIN_LOCK, /* pointer to bpf_spin_lock */ ARG_PTR_TO_SOCK_COMMON, /* pointer to sock_common */ ARG_PTR_TO_INT, /* pointer to int */ @@ -210,7 +213,8 @@ enum bpf_arg_type { /* type of values returned from helper functions */ enum bpf_return_type { - RET_INTEGER, /* function returns integer */ + RET_INTEGER, /* function returns 32-bit integer */ + RET_INTEGER64, /* function returns 64-bit integer */ RET_VOID, /* function doesn't return anything */ RET_PTR_TO_MAP_VALUE, /* returns a pointer to map elem value */ RET_PTR_TO_MAP_VALUE_OR_NULL, /* returns a pointer to map elem value or NULL */ diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c index ace8c22..2792eda 100644 --- a/kernel/bpf/core.c +++ b/kernel/bpf/core.c @@ -2067,7 +2067,7 @@ const struct bpf_func_proto bpf_tail_call_proto = { .ret_type = RET_VOID, .arg1_type = ARG_PTR_TO_CTX, .arg2_type = ARG_CONST_MAP_PTR, - .arg3_type = ARG_ANYTHING, + .arg3_type = ARG_ANYTHING32, }; /* Stub for JITs that only support cBPF. eBPF programs are interpreted. 
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c index 4266ffd..60f6e31 100644 --- a/kernel/bpf/helpers.c +++ b/kernel/bpf/helpers.c @@ -157,7 +157,7 @@ BPF_CALL_0(bpf_ktime_get_ns) const struct bpf_func_proto bpf_ktime_get_ns_proto = { .func = bpf_ktime_get_ns, .gpl_only = true, - .ret_type = RET_INTEGER, + .ret_type = RET_INTEGER64, }; BPF_CALL_0(bpf_get_current_pid_tgid) @@ -173,7 +173,7 @@ BPF_CALL_0(bpf_get_current_pid_tgid) const struct bpf_func_proto bpf_get_current_pid_tgid_proto = { .func = bpf_get_current_pid_tgid, .gpl_only = false, - .ret_type = RET_INTEGER, + .ret_type = RET_INTEGER64, }; BPF_CALL_0(bpf_get_current_uid_gid) @@ -193,7 +193,7 @@ BPF_CALL_0(bpf_get_current_uid_gid) const struct bpf_func_proto bpf_get_current_uid_gid_proto = { .func = bpf_get_current_uid_gid, .gpl_only = false, - .ret_type = RET_INTEGER, + .ret_type = RET_INTEGER64, }; BPF_CALL_2(bpf_get_current_comm, char *, buf, u32, size) @@ -221,7 +221,7 @@ const struct bpf_func_proto bpf_get_current_comm_proto = { .gpl_only = false, .ret_type = RET_INTEGER, .arg1_type = ARG_PTR_TO_UNINIT_MEM, - .arg2_type = ARG_CONST_SIZE, + .arg2_type = ARG_CONST_SIZE32, }; #if defined(CONFIG_QUEUED_SPINLOCKS) || defined(CONFIG_BPF_ARCH_SPINLOCK) @@ -331,7 +331,7 @@ BPF_CALL_0(bpf_get_current_cgroup_id) const struct bpf_func_proto bpf_get_current_cgroup_id_proto = { .func = bpf_get_current_cgroup_id, .gpl_only = false, - .ret_type = RET_INTEGER, + .ret_type = RET_INTEGER64, }; #ifdef CONFIG_CGROUP_BPF diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 2717172..07ab563 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -2492,7 +2492,9 @@ static bool arg_type_is_mem_ptr(enum bpf_arg_type type) static bool arg_type_is_mem_size(enum bpf_arg_type type) { return type == ARG_CONST_SIZE || - type == ARG_CONST_SIZE_OR_ZERO; + type == ARG_CONST_SIZE_OR_ZERO || + type == ARG_CONST_SIZE32 || + type == ARG_CONST_SIZE32_OR_ZERO; } static bool arg_type_is_int_ptr(enum 
bpf_arg_type type) @@ -2526,7 +2528,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 regno, if (err) return err; - if (arg_type == ARG_ANYTHING) { + if (arg_type == ARG_ANYTHING || arg_type == ARG_ANYTHING32) { if (is_pointer_value(env, regno)) { verbose(env, "R%d leaks addr into helper function\n", regno); @@ -2554,7 +2556,9 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 regno, type != expected_type) goto err_type; } else if (arg_type == ARG_CONST_SIZE || - arg_type == ARG_CONST_SIZE_OR_ZERO) { + arg_type == ARG_CONST_SIZE_OR_ZERO || + arg_type == ARG_CONST_SIZE32 || + arg_type == ARG_CONST_SIZE32_OR_ZERO) { expected_type = SCALAR_VALUE; if (type != expected_type) goto err_type; @@ -2660,7 +2664,8 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 regno, meta->map_ptr->value_size, false, meta); } else if (arg_type_is_mem_size(arg_type)) { - bool zero_size_allowed = (arg_type == ARG_CONST_SIZE_OR_ZERO); + bool zero_size_allowed = (arg_type == ARG_CONST_SIZE_OR_ZERO || + arg_type == ARG_CONST_SIZE32_OR_ZERO); /* remember the mem_size which may be used later * to refine return values. 
@@ -3333,7 +3338,7 @@ static int check_helper_call(struct bpf_verifier_env *env, int func_id, int insn } /* update return register (already marked as written above) */ - if (fn->ret_type == RET_INTEGER) { + if (fn->ret_type == RET_INTEGER || fn->ret_type == RET_INTEGER64) { /* sets type to SCALAR_VALUE */ mark_reg_unknown(env, regs, BPF_REG_0); } else if (fn->ret_type == RET_VOID) { diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c index 8607aba..f300b68 100644 --- a/kernel/trace/bpf_trace.c +++ b/kernel/trace/bpf_trace.c @@ -370,7 +370,7 @@ BPF_CALL_2(bpf_perf_event_read, struct bpf_map *, map, u64, flags) static const struct bpf_func_proto bpf_perf_event_read_proto = { .func = bpf_perf_event_read, .gpl_only = true, - .ret_type = RET_INTEGER, + .ret_type = RET_INTEGER64, .arg1_type = ARG_CONST_MAP_PTR, .arg2_type = ARG_ANYTHING, }; @@ -503,7 +503,7 @@ BPF_CALL_0(bpf_get_current_task) static const struct bpf_func_proto bpf_get_current_task_proto = { .func = bpf_get_current_task, .gpl_only = true, - .ret_type = RET_INTEGER, + .ret_type = RET_INTEGER64, }; BPF_CALL_2(bpf_current_task_under_cgroup, struct bpf_map *, map, u32, idx) diff --git a/net/core/filter.c b/net/core/filter.c index 55bfc94..72f2abe 100644 --- a/net/core/filter.c +++ b/net/core/filter.c @@ -1695,9 +1695,9 @@ static const struct bpf_func_proto bpf_skb_store_bytes_proto = { .gpl_only = false, .ret_type = RET_INTEGER, .arg1_type = ARG_PTR_TO_CTX, - .arg2_type = ARG_ANYTHING, + .arg2_type = ARG_ANYTHING32, .arg3_type = ARG_PTR_TO_MEM, - .arg4_type = ARG_CONST_SIZE, + .arg4_type = ARG_CONST_SIZE32, .arg5_type = ARG_ANYTHING, }; @@ -1760,9 +1760,9 @@ static const struct bpf_func_proto bpf_flow_dissector_load_bytes_proto = { .gpl_only = false, .ret_type = RET_INTEGER, .arg1_type = ARG_PTR_TO_CTX, - .arg2_type = ARG_ANYTHING, + .arg2_type = ARG_ANYTHING32, .arg3_type = ARG_PTR_TO_UNINIT_MEM, - .arg4_type = ARG_CONST_SIZE, + .arg4_type = ARG_CONST_SIZE32, }; 
BPF_CALL_5(bpf_skb_load_bytes_relative, const struct sk_buff *, skb, @@ -1911,7 +1911,7 @@ static const struct bpf_func_proto bpf_l3_csum_replace_proto = { .gpl_only = false, .ret_type = RET_INTEGER, .arg1_type = ARG_PTR_TO_CTX, - .arg2_type = ARG_ANYTHING, + .arg2_type = ARG_ANYTHING32, .arg3_type = ARG_ANYTHING, .arg4_type = ARG_ANYTHING, .arg5_type = ARG_ANYTHING, @@ -1964,7 +1964,7 @@ static const struct bpf_func_proto bpf_l4_csum_replace_proto = { .gpl_only = false, .ret_type = RET_INTEGER, .arg1_type = ARG_PTR_TO_CTX, - .arg2_type = ARG_ANYTHING, + .arg2_type = ARG_ANYTHING32, .arg3_type = ARG_ANYTHING, .arg4_type = ARG_ANYTHING, .arg5_type = ARG_ANYTHING, @@ -2003,9 +2003,9 @@ static const struct bpf_func_proto bpf_csum_diff_proto = { .pkt_access = true, .ret_type = RET_INTEGER, .arg1_type = ARG_PTR_TO_MEM_OR_NULL, - .arg2_type = ARG_CONST_SIZE_OR_ZERO, + .arg2_type = ARG_CONST_SIZE32_OR_ZERO, .arg3_type = ARG_PTR_TO_MEM_OR_NULL, - .arg4_type = ARG_CONST_SIZE_OR_ZERO, + .arg4_type = ARG_CONST_SIZE32_OR_ZERO, .arg5_type = ARG_ANYTHING, }; @@ -2186,7 +2186,7 @@ static const struct bpf_func_proto bpf_redirect_proto = { .func = bpf_redirect, .gpl_only = false, .ret_type = RET_INTEGER, - .arg1_type = ARG_ANYTHING, + .arg1_type = ARG_ANYTHING32, .arg2_type = ARG_ANYTHING, }; @@ -2964,7 +2964,7 @@ static const struct bpf_func_proto bpf_skb_change_proto_proto = { .gpl_only = false, .ret_type = RET_INTEGER, .arg1_type = ARG_PTR_TO_CTX, - .arg2_type = ARG_ANYTHING, + .arg2_type = ARG_ANYTHING32, .arg3_type = ARG_ANYTHING, }; @@ -2984,7 +2984,7 @@ static const struct bpf_func_proto bpf_skb_change_type_proto = { .gpl_only = false, .ret_type = RET_INTEGER, .arg1_type = ARG_PTR_TO_CTX, - .arg2_type = ARG_ANYTHING, + .arg2_type = ARG_ANYTHING32, }; static u32 bpf_skb_net_base_len(const struct sk_buff *skb) @@ -3287,7 +3287,7 @@ static const struct bpf_func_proto bpf_skb_change_tail_proto = { .gpl_only = false, .ret_type = RET_INTEGER, .arg1_type = ARG_PTR_TO_CTX, - 
.arg2_type = ARG_ANYTHING, + .arg2_type = ARG_ANYTHING32, .arg3_type = ARG_ANYTHING, }; @@ -3883,7 +3883,7 @@ static const struct bpf_func_proto bpf_skb_get_tunnel_key_proto = { .ret_type = RET_INTEGER, .arg1_type = ARG_PTR_TO_CTX, .arg2_type = ARG_PTR_TO_UNINIT_MEM, - .arg3_type = ARG_CONST_SIZE, + .arg3_type = ARG_CONST_SIZE32, .arg4_type = ARG_ANYTHING, }; @@ -3992,7 +3992,7 @@ static const struct bpf_func_proto bpf_skb_set_tunnel_key_proto = { .ret_type = RET_INTEGER, .arg1_type = ARG_PTR_TO_CTX, .arg2_type = ARG_PTR_TO_MEM, - .arg3_type = ARG_CONST_SIZE, + .arg3_type = ARG_CONST_SIZE32, .arg4_type = ARG_ANYTHING, }; @@ -4091,7 +4091,7 @@ BPF_CALL_1(bpf_skb_cgroup_id, const struct sk_buff *, skb) static const struct bpf_func_proto bpf_skb_cgroup_id_proto = { .func = bpf_skb_cgroup_id, .gpl_only = false, - .ret_type = RET_INTEGER, + .ret_type = RET_INTEGER64, .arg1_type = ARG_PTR_TO_CTX, }; @@ -4116,7 +4116,7 @@ BPF_CALL_2(bpf_skb_ancestor_cgroup_id, const struct sk_buff *, skb, int, static const struct bpf_func_proto bpf_skb_ancestor_cgroup_id_proto = { .func = bpf_skb_ancestor_cgroup_id, .gpl_only = false, - .ret_type = RET_INTEGER, + .ret_type = RET_INTEGER64, .arg1_type = ARG_PTR_TO_CTX, .arg2_type = ARG_ANYTHING, }; @@ -4162,7 +4162,7 @@ BPF_CALL_1(bpf_get_socket_cookie, struct sk_buff *, skb) static const struct bpf_func_proto bpf_get_socket_cookie_proto = { .func = bpf_get_socket_cookie, .gpl_only = false, - .ret_type = RET_INTEGER, + .ret_type = RET_INTEGER64, .arg1_type = ARG_PTR_TO_CTX, }; @@ -4174,7 +4174,7 @@ BPF_CALL_1(bpf_get_socket_cookie_sock_addr, struct bpf_sock_addr_kern *, ctx) static const struct bpf_func_proto bpf_get_socket_cookie_sock_addr_proto = { .func = bpf_get_socket_cookie_sock_addr, .gpl_only = false, - .ret_type = RET_INTEGER, + .ret_type = RET_INTEGER64, .arg1_type = ARG_PTR_TO_CTX, }; @@ -4186,7 +4186,7 @@ BPF_CALL_1(bpf_get_socket_cookie_sock_ops, struct bpf_sock_ops_kern *, ctx) static const struct bpf_func_proto 
bpf_get_socket_cookie_sock_ops_proto = { .func = bpf_get_socket_cookie_sock_ops, .gpl_only = false, - .ret_type = RET_INTEGER, + .ret_type = RET_INTEGER64, .arg1_type = ARG_PTR_TO_CTX, };

From patchwork Fri May 3 10:42:29 2019
X-Patchwork-Submitter: Jiong Wang
X-Patchwork-Id: 1094780
X-Patchwork-Delegate: bpf@iogearbox.net
From: Jiong Wang
To: alexei.starovoitov@gmail.com, daniel@iogearbox.net
Cc: bpf@vger.kernel.org, netdev@vger.kernel.org, oss-drivers@netronome.com
Subject: [PATCH v6 bpf-next 02/17] bpf: verifier: mark verified-insn with sub-register zext flag
Date: Fri, 3 May 2019 11:42:29 +0100
Message-Id: <1556880164-10689-3-git-send-email-jiong.wang@netronome.com>
In-Reply-To: <1556880164-10689-1-git-send-email-jiong.wang@netronome.com>
References: <1556880164-10689-1-git-send-email-jiong.wang@netronome.com>
X-Mailing-List: netdev@vger.kernel.org

eBPF ISA
specification requires the high 32 bits to be cleared whenever the low 32-bit sub-register is written. This applies to the destination register of ALU32 instructions etc. JIT back-ends must guarantee this semantic when doing code-gen.

The x86-64 and arm64 ISAs have the same semantics, so the corresponding JIT back-ends don't need to do extra work. However, 32-bit arches (arm, nfp etc.) and some other 64-bit arches (powerpc, sparc etc.) need an explicit zero-extension sequence to meet this semantic.

This is important, because for code like the following:

  u64_value = (u64) u32_value
  ... other uses of u64_value

the compiler can exploit the semantics described above and omit the zero extensions when extending u32_value to u64_value. Hardware, runtime, or BPF JIT back-ends are then responsible for guaranteeing them.

Some benchmarks show ~40% of total insns are sub-register writes, meaning ~40% extra code-gen (it could go even higher on arches that require two shifts for zero extension) because the JIT back-end needs to emit extra code for all such instructions.

However, this is not always necessary: when u32_value is never cast into a u64, which is quite normal in real-life programs, the zero extension is wasted. So it would be really good if we could identify those places where such a type cast happens and only do zero extensions for them, not for the others. This could save a lot of BPF code-gen.

Algo:
 - Split read flags into READ32 and READ64.
 - Record the indices of instructions that do sub-register defs (writes). These indices need to stay with the reg state so path pruning and bpf-to-bpf function calls can be handled properly. They are kept up to date while doing the insn walk.
 - A full register read on an active sub-register def marks the def insn as needing zero extension on its dst register.
 - A new sub-register write overrides the old one. A new full register write makes the register free of zero extension on its dst register.
 - When propagating read64 during path pruning, also mark those def insns whose defs are still active sub-register writes.
Reviewed-by: Jakub Kicinski Signed-off-by: Jiong Wang --- include/linux/bpf_verifier.h | 14 ++- kernel/bpf/verifier.c | 213 ++++++++++++++++++++++++++++++++++++++++--- 2 files changed, 211 insertions(+), 16 deletions(-) diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h index 1305ccb..6a0b12c 100644 --- a/include/linux/bpf_verifier.h +++ b/include/linux/bpf_verifier.h @@ -36,9 +36,11 @@ */ enum bpf_reg_liveness { REG_LIVE_NONE = 0, /* reg hasn't been read or written this branch */ - REG_LIVE_READ, /* reg was read, so we're sensitive to initial value */ - REG_LIVE_WRITTEN, /* reg was written first, screening off later reads */ - REG_LIVE_DONE = 4, /* liveness won't be updating this register anymore */ + REG_LIVE_READ32 = 0x1, /* reg was read, so we're sensitive to initial value */ + REG_LIVE_READ64 = 0x2, /* likewise, but full 64-bit content matters */ + REG_LIVE_READ = REG_LIVE_READ32 | REG_LIVE_READ64, + REG_LIVE_WRITTEN = 0x4, /* reg was written first, screening off later reads */ + REG_LIVE_DONE = 0x8, /* liveness won't be updating this register anymore */ }; struct bpf_reg_state { @@ -131,6 +133,11 @@ struct bpf_reg_state { * pointing to bpf_func_state. */ u32 frameno; + /* Tracks subreg definition. The stored value is the insn_idx of the + * writing insn. This is safe because subreg_def is used before any insn + * patching which only happens after main verification finished. 
+ */ + s32 subreg_def; enum bpf_reg_liveness live; }; @@ -232,6 +239,7 @@ struct bpf_insn_aux_data { int ctx_field_size; /* the ctx field size for load insn, maybe 0 */ int sanitize_stack_off; /* stack slot to be cleared */ bool seen; /* this insn was processed by the verifier */ + bool zext_dst; /* this insn zero extend dst reg */ u8 alu_state; /* used in combination with alu_limit */ unsigned int orig_idx; /* original instruction index */ }; diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 07ab563..43ea665 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -981,6 +981,7 @@ static void mark_reg_not_init(struct bpf_verifier_env *env, __mark_reg_not_init(regs + regno); } +#define DEF_NOT_SUBREG (-1) static void init_reg_state(struct bpf_verifier_env *env, struct bpf_func_state *state) { @@ -991,6 +992,7 @@ static void init_reg_state(struct bpf_verifier_env *env, mark_reg_not_init(env, regs, i); regs[i].live = REG_LIVE_NONE; regs[i].parent = NULL; + regs[i].subreg_def = DEF_NOT_SUBREG; } /* frame pointer */ @@ -1136,7 +1138,7 @@ static int check_subprogs(struct bpf_verifier_env *env) */ static int mark_reg_read(struct bpf_verifier_env *env, const struct bpf_reg_state *state, - struct bpf_reg_state *parent) + struct bpf_reg_state *parent, u8 flag) { bool writes = parent == state->parent; /* Observe write marks */ int cnt = 0; @@ -1151,17 +1153,26 @@ static int mark_reg_read(struct bpf_verifier_env *env, parent->var_off.value, parent->off); return -EFAULT; } - if (parent->live & REG_LIVE_READ) + /* The first condition is more likely to be true than the + * second, checked it first. + */ + if ((parent->live & REG_LIVE_READ) == flag || + parent->live & REG_LIVE_READ64) /* The parentage chain never changes and * this parent was already marked as LIVE_READ. * There is no need to keep walking the chain again and * keep re-marking all parents as LIVE_READ. 
* This case happens when the same register is read * multiple times without writes into it in-between. + * Also, if parent has the stronger REG_LIVE_READ64 set, + * then no need to set the weak REG_LIVE_READ32. */ break; /* ... then we depend on parent's value */ - parent->live |= REG_LIVE_READ; + parent->live |= flag; + /* REG_LIVE_READ64 overrides REG_LIVE_READ32. */ + if (flag == REG_LIVE_READ64) + parent->live &= ~REG_LIVE_READ32; state = parent; parent = state->parent; writes = true; @@ -1173,12 +1184,146 @@ static int mark_reg_read(struct bpf_verifier_env *env, return 0; } +static bool helper_call_arg64(struct bpf_verifier_env *env, int func_id, + u32 regno) +{ + /* get_func_proto must succeed, other it should have been rejected + * early inside check_helper_call. + */ + const struct bpf_func_proto *fn = + env->ops->get_func_proto(func_id, env->prog); + enum bpf_arg_type arg_type; + + switch (regno) { + case BPF_REG_1: + arg_type = fn->arg1_type; + break; + case BPF_REG_2: + arg_type = fn->arg2_type; + break; + case BPF_REG_3: + arg_type = fn->arg3_type; + break; + case BPF_REG_4: + arg_type = fn->arg4_type; + break; + case BPF_REG_5: + arg_type = fn->arg5_type; + break; + default: + arg_type = ARG_DONTCARE; + } + + return arg_type != ARG_CONST_SIZE32 && + arg_type != ARG_CONST_SIZE32_OR_ZERO && + arg_type != ARG_ANYTHING32; +} + +/* This function is supposed to be used by the following 32-bit optimization + * code only. It returns TRUE if the source or destination register operates + * on 64-bit, otherwise return FALSE. + */ +static bool is_reg64(struct bpf_verifier_env *env, struct bpf_insn *insn, + u32 regno, struct bpf_reg_state *reg, enum reg_arg_type t) +{ + u8 code, class, op; + + code = insn->code; + class = BPF_CLASS(code); + op = BPF_OP(code); + if (class == BPF_JMP) { + /* BPF_EXIT for "main" will reach here. Return TRUE + * conservatively. 
+ */ + if (op == BPF_EXIT) + return true; + if (op == BPF_CALL) { + /* BPF to BPF call will reach here because of marking + * caller saved clobber with DST_OP_NO_MARK for which we + * don't care the register def because they are anyway + * marked as NOT_INIT already. + */ + if (insn->src_reg == BPF_PSEUDO_CALL) + return false; + /* Helper call will reach here because of arg type + * check. + */ + if (t == SRC_OP) + return helper_call_arg64(env, insn->imm, regno); + + return false; + } + } + + if (class == BPF_ALU64 || class == BPF_JMP || + /* BPF_END always use BPF_ALU class. */ + (class == BPF_ALU && op == BPF_END && insn->imm == 64)) + return true; + + if (class == BPF_ALU || class == BPF_JMP32) + return false; + + if (class == BPF_LDX) { + if (t != SRC_OP) + return BPF_SIZE(code) == BPF_DW; + /* LDX source must be ptr. */ + return true; + } + + if (class == BPF_STX) { + if (reg->type != SCALAR_VALUE) + return true; + return BPF_SIZE(code) == BPF_DW; + } + + if (class == BPF_LD) { + u8 mode = BPF_MODE(code); + + /* LD_IMM64 */ + if (mode == BPF_IMM) + return true; + + /* Both LD_IND and LD_ABS return 32-bit data. */ + if (t != SRC_OP) + return false; + + /* Implicit ctx ptr. */ + if (regno == BPF_REG_6) + return true; + + /* Explicit source could be any width. */ + return true; + } + + if (class == BPF_ST) + /* The only source register for BPF_ST is a ptr. */ + return true; + + /* Conservatively return true at default. */ + return true; +} + +static void mark_insn_zext(struct bpf_verifier_env *env, + struct bpf_reg_state *reg) +{ + s32 def_idx = reg->subreg_def; + + if (def_idx == DEF_NOT_SUBREG) + return; + + env->insn_aux_data[def_idx].zext_dst = true; + /* The dst will be zero extended, so won't be sub-register anymore. 
*/ + reg->subreg_def = DEF_NOT_SUBREG; +} + static int check_reg_arg(struct bpf_verifier_env *env, u32 regno, enum reg_arg_type t) { struct bpf_verifier_state *vstate = env->cur_state; struct bpf_func_state *state = vstate->frame[vstate->curframe]; + struct bpf_insn *insn = env->prog->insnsi + env->insn_idx; struct bpf_reg_state *reg, *regs = state->regs; + bool rw64; if (regno >= MAX_BPF_REG) { verbose(env, "R%d is invalid\n", regno); @@ -1186,6 +1331,7 @@ static int check_reg_arg(struct bpf_verifier_env *env, u32 regno, } reg = ®s[regno]; + rw64 = is_reg64(env, insn, regno, reg, t); if (t == SRC_OP) { /* check whether register used as source operand can be read */ if (reg->type == NOT_INIT) { @@ -1196,7 +1342,11 @@ static int check_reg_arg(struct bpf_verifier_env *env, u32 regno, if (regno == BPF_REG_FP) return 0; - return mark_reg_read(env, reg, reg->parent); + if (rw64) + mark_insn_zext(env, reg); + + return mark_reg_read(env, reg, reg->parent, + rw64 ? REG_LIVE_READ64 : REG_LIVE_READ32); } else { /* check whether register used as dest operand can be written to */ if (regno == BPF_REG_FP) { @@ -1204,6 +1354,7 @@ static int check_reg_arg(struct bpf_verifier_env *env, u32 regno, return -EACCES; } reg->live |= REG_LIVE_WRITTEN; + reg->subreg_def = rw64 ? DEF_NOT_SUBREG : env->insn_idx; if (t == DST_OP) mark_reg_unknown(env, regs, regno); } @@ -1383,7 +1534,8 @@ static int check_stack_read(struct bpf_verifier_env *env, state->regs[value_regno].live |= REG_LIVE_WRITTEN; } mark_reg_read(env, ®_state->stack[spi].spilled_ptr, - reg_state->stack[spi].spilled_ptr.parent); + reg_state->stack[spi].spilled_ptr.parent, + REG_LIVE_READ64); return 0; } else { int zeros = 0; @@ -1400,7 +1552,9 @@ static int check_stack_read(struct bpf_verifier_env *env, return -EACCES; } mark_reg_read(env, ®_state->stack[spi].spilled_ptr, - reg_state->stack[spi].spilled_ptr.parent); + reg_state->stack[spi].spilled_ptr.parent, + size == BPF_REG_SIZE + ? 
REG_LIVE_READ64 : REG_LIVE_READ32); if (value_regno >= 0) { if (zeros == size) { /* any size read into register is zero extended, @@ -2109,6 +2263,12 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn value_regno); if (reg_type_may_be_null(reg_type)) regs[value_regno].id = ++env->id_gen; + /* A load of ctx field could have different + * actual load size with the one encoded in the + * insn. When the dst is PTR, it is for sure not + * a sub-register. + */ + regs[value_regno].subreg_def = DEF_NOT_SUBREG; } regs[value_regno].type = reg_type; } @@ -2368,7 +2528,9 @@ static int check_stack_boundary(struct bpf_verifier_env *env, int regno, * the whole slot to be marked as 'read' */ mark_reg_read(env, &state->stack[spi].spilled_ptr, - state->stack[spi].spilled_ptr.parent); + state->stack[spi].spilled_ptr.parent, + access_size == BPF_REG_SIZE + ? REG_LIVE_READ64 : REG_LIVE_READ32); } return update_stack_depth(env, state, min_off); } @@ -3337,10 +3499,16 @@ static int check_helper_call(struct bpf_verifier_env *env, int func_id, int insn check_reg_arg(env, caller_saved[i], DST_OP_NO_MARK); } + /* assume helper call has returned 64-bit value. */ + regs[BPF_REG_0].subreg_def = DEF_NOT_SUBREG; + /* update return register (already marked as written above) */ if (fn->ret_type == RET_INTEGER || fn->ret_type == RET_INTEGER64) { /* sets type to SCALAR_VALUE */ mark_reg_unknown(env, regs, BPF_REG_0); + /* RET_INTEGER returns sub-register. 
*/ + if (fn->ret_type == RET_INTEGER) + regs[BPF_REG_0].subreg_def = insn_idx; } else if (fn->ret_type == RET_VOID) { regs[BPF_REG_0].type = NOT_INIT; } else if (fn->ret_type == RET_PTR_TO_MAP_VALUE_OR_NULL || @@ -4268,6 +4436,7 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn) */ *dst_reg = *src_reg; dst_reg->live |= REG_LIVE_WRITTEN; + dst_reg->subreg_def = DEF_NOT_SUBREG; } else { /* R1 = (u32) R2 */ if (is_pointer_value(env, insn->src_reg)) { @@ -4278,6 +4447,7 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn) } else if (src_reg->type == SCALAR_VALUE) { *dst_reg = *src_reg; dst_reg->live |= REG_LIVE_WRITTEN; + dst_reg->subreg_def = env->insn_idx; } else { mark_reg_unknown(env, regs, insn->dst_reg); @@ -5341,6 +5511,8 @@ static int check_ld_abs(struct bpf_verifier_env *env, struct bpf_insn *insn) * Already marked as written above. */ mark_reg_unknown(env, regs, BPF_REG_0); + /* ld_abs load up to 32-bit skb data. */ + regs[BPF_REG_0].subreg_def = env->insn_idx; return 0; } @@ -6281,20 +6453,33 @@ static bool states_equal(struct bpf_verifier_env *env, return true; } +/* Return 0 if no propagation happened. Return negative error code if error + * happened. Otherwise, return the propagated bit. + */ static int propagate_liveness_reg(struct bpf_verifier_env *env, struct bpf_reg_state *reg, struct bpf_reg_state *parent_reg) { + u8 parent_flag = parent_reg->live & REG_LIVE_READ; + u8 flag = reg->live & REG_LIVE_READ; int err; - if (parent_reg->live & REG_LIVE_READ || !(reg->live & REG_LIVE_READ)) + /* When comes here, read flags of PARENT_REG or REG could be any of + * REG_LIVE_READ64, REG_LIVE_READ32, REG_LIVE_NONE. There is no need + * of propagation if PARENT_REG has strongest REG_LIVE_READ64. + */ + if (parent_flag == REG_LIVE_READ64 || + /* Or if there is no read flag from REG. */ + !flag || + /* Or if the read flag from REG is the same as PARENT_REG. 
*/ + parent_flag == flag) return 0; - err = mark_reg_read(env, reg, parent_reg); + err = mark_reg_read(env, reg, parent_reg, flag); if (err) return err; - return 0; + return flag; } /* A write screens off any subsequent reads; but write marks come from the @@ -6328,8 +6513,10 @@ static int propagate_liveness(struct bpf_verifier_env *env, for (i = frame < vstate->curframe ? BPF_REG_6 : 0; i < BPF_REG_FP; i++) { err = propagate_liveness_reg(env, &state_reg[i], &parent_reg[i]); - if (err) + if (err < 0) return err; + if (err == REG_LIVE_READ64) + mark_insn_zext(env, &parent_reg[i]); } /* Propagate stack slots. */ @@ -6339,11 +6526,11 @@ static int propagate_liveness(struct bpf_verifier_env *env, state_reg = &state->stack[i].spilled_ptr; err = propagate_liveness_reg(env, state_reg, parent_reg); - if (err) + if (err < 0) return err; } } - return err; + return 0; } static int is_state_visited(struct bpf_verifier_env *env, int insn_idx)

From patchwork Fri May 3 10:42:30 2019
From: Jiong Wang
To: alexei.starovoitov@gmail.com, daniel@iogearbox.net
Cc: bpf@vger.kernel.org, netdev@vger.kernel.org, oss-drivers@netronome.com, Jiong Wang
Subject: [PATCH v6 bpf-next 03/17] bpf: verifier: mark patched-insn with sub-register zext flag
Date: Fri, 3 May 2019 11:42:30 +0100
Message-Id: <1556880164-10689-4-git-send-email-jiong.wang@netronome.com>

Patched insns do not go through generic verification, so they have no zero extension information collected during insn walking. We don't bother analyzing them at the moment; for any sub-register def that comes from them, we just conservatively mark it as needing zero extension.

Signed-off-by: Jiong Wang
---
kernel/bpf/verifier.c | 37 +++++++++++++++++++++++++++++++++----
1 file changed, 33 insertions(+), 4 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 43ea665..b43e8a2 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -1303,6 +1303,24 @@ static bool is_reg64(struct bpf_verifier_env *env, struct bpf_insn *insn, return true; } +/* Return TRUE if INSN doesn't have explicit value define. */ +static bool insn_no_def(struct bpf_insn *insn) +{ + u8 class = BPF_CLASS(insn->code); + + return (class == BPF_JMP || class == BPF_JMP32 || + class == BPF_STX || class == BPF_ST); +} + +/* Return TRUE if INSN has defined any 32-bit value explicitly.
*/ +static bool insn_has_def32(struct bpf_verifier_env *env, struct bpf_insn *insn) +{ + if (insn_no_def(insn)) + return false; + + return !is_reg64(env, insn, insn->dst_reg, NULL, DST_OP); +} + static void mark_insn_zext(struct bpf_verifier_env *env, struct bpf_reg_state *reg) { @@ -7306,14 +7324,23 @@ static void convert_pseudo_ld_imm64(struct bpf_verifier_env *env) * insni[off, off + cnt). Adjust corresponding insn_aux_data by copying * [0, off) and [off, end) to new locations, so the patched range stays zero */ -static int adjust_insn_aux_data(struct bpf_verifier_env *env, u32 prog_len, - u32 off, u32 cnt) +static int adjust_insn_aux_data(struct bpf_verifier_env *env, + struct bpf_prog *new_prog, u32 off, u32 cnt) { struct bpf_insn_aux_data *new_data, *old_data = env->insn_aux_data; + struct bpf_insn *insn = new_prog->insnsi; + u32 prog_len; int i; + /* aux info at OFF always needs adjustment, no matter fast path + * (cnt == 1) is taken or not. There is no guarantee INSN at OFF is the + * original insn at old prog. 
+ */ + old_data[off].zext_dst = insn_has_def32(env, insn + off + cnt - 1); + if (cnt == 1) return 0; + prog_len = new_prog->len; new_data = vzalloc(array_size(prog_len, sizeof(struct bpf_insn_aux_data))); if (!new_data) @@ -7321,8 +7348,10 @@ static int adjust_insn_aux_data(struct bpf_verifier_env *env, u32 prog_len, memcpy(new_data, old_data, sizeof(struct bpf_insn_aux_data) * off); memcpy(new_data + off + cnt - 1, old_data + off, sizeof(struct bpf_insn_aux_data) * (prog_len - off - cnt + 1)); - for (i = off; i < off + cnt - 1; i++) + for (i = off; i < off + cnt - 1; i++) { new_data[i].seen = true; + new_data[i].zext_dst = insn_has_def32(env, insn + i); + } env->insn_aux_data = new_data; vfree(old_data); return 0; @@ -7355,7 +7384,7 @@ static struct bpf_prog *bpf_patch_insn_data(struct bpf_verifier_env *env, u32 of env->insn_aux_data[off].orig_idx); return NULL; } - if (adjust_insn_aux_data(env, new_prog->len, off, len)) + if (adjust_insn_aux_data(env, new_prog, off, len)) return NULL; adjust_subprog_starts(env, off, len); return new_prog;

From patchwork Fri May 3 10:42:31 2019
From: Jiong Wang
To: alexei.starovoitov@gmail.com, daniel@iogearbox.net
Cc: bpf@vger.kernel.org, netdev@vger.kernel.org, oss-drivers@netronome.com, Jiong Wang
Subject: [PATCH v6 bpf-next 04/17] bpf: introduce new alu insn BPF_ZEXT for explicit zero extension
Date: Fri, 3 May 2019 11:42:31 +0100
Message-Id: <1556880164-10689-5-git-send-email-jiong.wang@netronome.com>

This patch introduces a new alu32 insn, BPF_ZEXT, and allocates the unused opcode 0xe0 to it. Compared with the other alu32 insns, zero extension of the low 32 bits is the only semantics of this instruction. It also allows the various JIT back-ends to generate optimal zero-extension code.

BPF_ZEXT is supposed to be encoded with BPF_ALU only, and is supposed to be generated by the later 32-bit optimization pass inside the verifier, and only for those arches that do not support hardware implicit zero extension. It is not supposed to be used directly in user programs at the moment, so there is no need to recognize it inside generic verification code; it just needs to be supported for execution on the interpreter and related JIT back-ends.
Signed-off-by: Jiong Wang
---
Documentation/networking/filter.txt | 10 ++++++++++ include/uapi/linux/bpf.h | 3 +++ kernel/bpf/core.c | 4 ++++ tools/include/uapi/linux/bpf.h | 3 +++ 4 files changed, 20 insertions(+)

diff --git a/Documentation/networking/filter.txt b/Documentation/networking/filter.txt index 319e5e0..1cb3e42 100644 --- a/Documentation/networking/filter.txt +++ b/Documentation/networking/filter.txt @@ -903,6 +903,16 @@ If BPF_CLASS(code) == BPF_ALU or BPF_ALU64 [ in eBPF ], BPF_OP(code) is one of: BPF_MOV 0xb0 /* eBPF only: mov reg to reg */ BPF_ARSH 0xc0 /* eBPF only: sign extending shift right */ BPF_END 0xd0 /* eBPF only: endianness conversion */ + BPF_ZEXT 0xe0 /* eBPF BPF_ALU only: zero-extends low 32-bit */ + +Compared with BPF_ALU | BPF_MOV, which zero-extends the low 32 bits implicitly, +BPF_ALU | BPF_ZEXT zero-extends the low 32 bits explicitly. Such zero extension +is not the main semantics of the former, but it is of the latter. Therefore, a +JIT optimizer may optimize out the zero extension for the former when it is +concluded safe to do so, but must never do so for the latter. The LLVM compiler +won't generate BPF_ZEXT, and hand-written assembly is not supposed to use it. +The verifier's 32-bit optimization pass, which removes the zero extension +semantics from the other BPF_ALU instructions, is the only place that generates +it.
If BPF_CLASS(code) == BPF_JMP or BPF_JMP32 [ in eBPF ], BPF_OP(code) is one of: diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h index 72336ba..22ccdf4 100644 --- a/include/uapi/linux/bpf.h +++ b/include/uapi/linux/bpf.h @@ -32,6 +32,9 @@ #define BPF_FROM_LE BPF_TO_LE #define BPF_FROM_BE BPF_TO_BE +/* zero extend low 32-bit */ +#define BPF_ZEXT 0xe0 + /* jmp encodings */ #define BPF_JNE 0x50 /* jump != */ #define BPF_JLT 0xa0 /* LT is unsigned, '<' */ diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c index 2792eda..ee8703d 100644 --- a/kernel/bpf/core.c +++ b/kernel/bpf/core.c @@ -1152,6 +1152,7 @@ EXPORT_SYMBOL_GPL(__bpf_call_base); INSN_2(ALU, NEG), \ INSN_3(ALU, END, TO_BE), \ INSN_3(ALU, END, TO_LE), \ + INSN_2(ALU, ZEXT), \ /* Immediate based. */ \ INSN_3(ALU, ADD, K), \ INSN_3(ALU, SUB, K), \ @@ -1352,6 +1353,9 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn, u64 *stack) ALU64_NEG: DST = -DST; CONT; + ALU_ZEXT: + DST = (u32) DST; + CONT; ALU_MOV_X: DST = (u32) SRC; CONT; diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h index 72336ba..22ccdf4 100644 --- a/tools/include/uapi/linux/bpf.h +++ b/tools/include/uapi/linux/bpf.h @@ -32,6 +32,9 @@ #define BPF_FROM_LE BPF_TO_LE #define BPF_FROM_BE BPF_TO_BE +/* zero extend low 32-bit */ +#define BPF_ZEXT 0xe0 + /* jmp encodings */ #define BPF_JNE 0x50 /* jump != */ #define BPF_JLT 0xa0 /* LT is unsigned, '<' */

From patchwork Fri May 3 10:42:32 2019
From: Jiong Wang
To: alexei.starovoitov@gmail.com, daniel@iogearbox.net
Cc: bpf@vger.kernel.org, netdev@vger.kernel.org, oss-drivers@netronome.com, Jiong Wang
Subject: [PATCH v6 bpf-next 05/17] bpf: verifier: insert BPF_ZEXT according to zext analysis result
Date: Fri, 3 May 2019 11:42:32 +0100
Message-Id: <1556880164-10689-6-git-send-email-jiong.wang@netronome.com>

After the previous patches, the verifier has marked those instructions that really need zero extension on dst_reg. It is then up to each back-end to decide how to use this information to eliminate unnecessary zero-extension code-gen during JIT compilation.

One approach is:
1. The verifier inserts explicit zero extension for those instructions that need it.
2. JIT back-ends then do NOT generate zero extension for sub-register writes any more.

The good thing about this approach is that it requires no major change to the JIT back-end interface, so all back-ends can get this optimization. However, only those back-ends that do not have hardware zero extension want this optimization.
For back-ends like x86_64 and AArch64, there is hardware support, so zext insertion should be disabled. This patch introduces a new target hook, "bpf_jit_hardware_zext", which defaults to true, meaning the underlying hardware will do zero extension implicitly, so zext insertion by the verifier is disabled. Once a back-end overrides this hook to return false, the verifier will insert BPF_ZEXT to clear the high 32 bits of definitions when necessary. Offload targets do not use this native target hook; instead, they can get the optimization results using bpf_prog_offload_ops.finalize.

Reviewed-by: Jakub Kicinski
Signed-off-by: Jiong Wang
---
include/linux/bpf.h | 1 + include/linux/filter.h | 1 + kernel/bpf/core.c | 8 ++++++++ kernel/bpf/verifier.c | 40 ++++++++++++++++++++++++++++++++++++++++ 4 files changed, 50 insertions(+)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 11a5fb9..cf3c3f3 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -373,6 +373,7 @@ struct bpf_prog_aux { u32 id; u32 func_cnt; /* used by non-func prog as the number of func progs */ u32 func_idx; /* 0 for non-func prog, the index in func array for func prog */ + bool verifier_zext; /* Zero extensions has been inserted by verifier. */ bool offload_requested; struct bpf_prog **func; void *jit_data; /* JIT specific data.
arch dependent */ diff --git a/include/linux/filter.h b/include/linux/filter.h index fb0edad..8750657 100644 --- a/include/linux/filter.h +++ b/include/linux/filter.h @@ -821,6 +821,7 @@ u64 __bpf_call_base(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5); struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog); void bpf_jit_compile(struct bpf_prog *prog); +bool bpf_jit_hardware_zext(void); bool bpf_helper_changes_pkt_data(void *func); static inline bool bpf_dump_raw_ok(void) diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c index ee8703d..9754346 100644 --- a/kernel/bpf/core.c +++ b/kernel/bpf/core.c @@ -2095,6 +2095,14 @@ bool __weak bpf_helper_changes_pkt_data(void *func) return false; } +/* Return TRUE is the target hardware of JIT will do zero extension to high bits + * when writing to low 32-bit of one register. Otherwise, return FALSE. + */ +bool __weak bpf_jit_hardware_zext(void) +{ + return true; +} + /* To execute LD_ABS/LD_IND instructions __bpf_prog_run() may call * skb_copy_bits(), so provide a weak definition of it for NET-less config. 
*/ diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index b43e8a2..999da02 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -7648,6 +7648,37 @@ static int opt_remove_nops(struct bpf_verifier_env *env) return 0; } +static int opt_subreg_zext_lo32(struct bpf_verifier_env *env) +{ + struct bpf_insn_aux_data *aux = env->insn_aux_data; + struct bpf_insn *insns = env->prog->insnsi; + int i, delta = 0, len = env->prog->len; + struct bpf_insn zext_patch[2]; + struct bpf_prog *new_prog; + + zext_patch[1] = BPF_ALU32_IMM(BPF_ZEXT, 0, 0); + for (i = 0; i < len; i++) { + int adj_idx = i + delta; + struct bpf_insn insn; + + if (!aux[adj_idx].zext_dst) + continue; + + insn = insns[adj_idx]; + zext_patch[0] = insn; + zext_patch[1].dst_reg = insn.dst_reg; + new_prog = bpf_patch_insn_data(env, adj_idx, zext_patch, 2); + if (!new_prog) + return -ENOMEM; + env->prog = new_prog; + insns = new_prog->insnsi; + aux = env->insn_aux_data; + delta += 2; + } + + return 0; +} + /* convert load instructions that access fields of a context type into a * sequence of instructions that access fields of the underlying structure: * struct __sk_buff -> struct sk_buff @@ -8499,6 +8530,15 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, if (ret == 0) ret = fixup_bpf_calls(env); + /* do 32-bit optimization after insn patching has done so those patched + * insns could be handled correctly. 
+ */ + if (ret == 0 && !bpf_jit_hardware_zext() && + !bpf_prog_is_dev_bound(env->prog->aux)) { + ret = opt_subreg_zext_lo32(env); + env->prog->aux->verifier_zext = !ret; + } + if (ret == 0) ret = fixup_call_args(env);

From patchwork Fri May 3 10:42:33 2019
From: Jiong Wang
To: alexei.starovoitov@gmail.com, daniel@iogearbox.net
Cc: bpf@vger.kernel.org, netdev@vger.kernel.org, oss-drivers@netronome.com, Jiong Wang
Subject: [PATCH v6 bpf-next 06/17] bpf: introduce new bpf prog load flags "BPF_F_TEST_RND_HI32"
Date: Fri, 3 May 2019 11:42:33 +0100
Message-Id: <1556880164-10689-7-git-send-email-jiong.wang@netronome.com>

x86_64 and AArch64 are perhaps the two arches on which the bpf testsuite is run most frequently; however, the zero extension insertion pass is not enabled for them because of their hardware support. It is critical to guarantee the correctness of the pass, as it is supposed to be enabled by default for a couple of other arches, for example PowerPC, SPARC, arm, NFP etc. Therefore, it would be very useful if there were a way to test this pass on, for example, x86_64.

The test methodology employed by this set is "poisoning" unused bits: the high 32 bits of a definition are randomized if they are identified as not used by any later instruction. Such randomization is only enabled under a testing mode, gated by the new bpf prog load flag "BPF_F_TEST_RND_HI32".

Suggested-by: Alexei Starovoitov
Signed-off-by: Jiong Wang
---
include/uapi/linux/bpf.h | 18 ++++++++++++++++++ kernel/bpf/syscall.c | 4 +++- tools/include/uapi/linux/bpf.h | 18 ++++++++++++++++++ 3 files changed, 39 insertions(+), 1 deletion(-)

diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h index 22ccdf4..1bf32c3 100644 --- a/include/uapi/linux/bpf.h +++ b/include/uapi/linux/bpf.h @@ -263,6 +263,24 @@ enum bpf_attach_type { */ #define BPF_F_ANY_ALIGNMENT (1U << 1) +/* BPF_F_TEST_RND_HI32 is used in BPF_PROG_LOAD command for testing purposes. + * Verifier does sub-register def/use analysis and identifies instructions whose + * def only matters for low 32-bit, high 32-bit is never referenced later + * through implicit zero extension. Therefore verifier notifies JIT back-ends + * that it is safe to ignore clearing high 32-bit for these instructions. This + * saves some back-ends a lot of code-gen. However such optimization is not + * necessary on some arches, for example x86_64, arm64 etc, whose JIT back-ends + * hence haven't used the verifier's analysis result.
But, we really want to have a + * way to be able to verify the correctness of the described optimization on + * x86_64, on which testsuites are frequently exercised. + * + * So, this flag is introduced. Once it is set, the verifier will randomize the + * high 32 bits for those instructions that have been identified as safe to + * ignore. Then, if the verifier's analysis is incorrect, such randomization + * will regress tests and expose bugs. + */ +#define BPF_F_TEST_RND_HI32 (1U << 2) + /* When BPF ldimm64's insn[0].src_reg != 0 then this can have * two extensions: * diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index ad3ccf8..ec1b42c 100644 --- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -1601,7 +1601,9 @@ static int bpf_prog_load(union bpf_attr *attr, union bpf_attr __user *uattr) if (CHECK_ATTR(BPF_PROG_LOAD)) return -EINVAL; - if (attr->prog_flags & ~(BPF_F_STRICT_ALIGNMENT | BPF_F_ANY_ALIGNMENT)) + if (attr->prog_flags & ~(BPF_F_STRICT_ALIGNMENT | + BPF_F_ANY_ALIGNMENT | + BPF_F_TEST_RND_HI32)) return -EINVAL; if (!IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) && diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h index 22ccdf4..1bf32c3 100644 --- a/tools/include/uapi/linux/bpf.h +++ b/tools/include/uapi/linux/bpf.h @@ -263,6 +263,24 @@ enum bpf_attach_type { */ #define BPF_F_ANY_ALIGNMENT (1U << 1) +/* BPF_F_TEST_RND_HI32 is used in BPF_PROG_LOAD command for testing purposes. + * Verifier does sub-register def/use analysis and identifies instructions whose + * def only matters for low 32-bit, high 32-bit is never referenced later + * through implicit zero extension. Therefore verifier notifies JIT back-ends + * that it is safe to ignore clearing high 32-bit for these instructions. This + * saves some back-ends a lot of code-gen. However such optimization is not + * necessary on some arches, for example x86_64, arm64 etc, whose JIT back-ends + * hence haven't used the verifier's analysis result.
But, we really want to have a + * way to be able to verify the correctness of the described optimization on + * x86_64 on which testsuites are frequently exercised. + * + * So, this flag is introduced. Once it is set, verifier will randomize high + * 32-bit for those instructions who has been identified as safe to ignore them. + * Then, if verifier is not doing correct analysis, such randomization will + * regress tests to expose bugs. + */ +#define BPF_F_TEST_RND_HI32 (1U << 2) + /* When BPF ldimm64's insn[0].src_reg != 0 then this can have * two extensions: * From patchwork Fri May 3 10:42:34 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiong Wang X-Patchwork-Id: 1094807 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Original-To: incoming-bpf@patchwork.ozlabs.org Delivered-To: patchwork-incoming-bpf@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=bpf-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=netronome.com Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=netronome-com.20150623.gappssmtp.com header.i=@netronome-com.20150623.gappssmtp.com header.b="gCmkMzLq"; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 44wTK4528gz9s55 for ; Fri, 3 May 2019 20:44:24 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726396AbfECKoY (ORCPT ); Fri, 3 May 2019 06:44:24 -0400 Received: from mail-wm1-f65.google.com ([209.85.128.65]:52353 "EHLO mail-wm1-f65.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727491AbfECKnx (ORCPT ); Fri, 3 May 2019 06:43:53 -0400 Received: by mail-wm1-f65.google.com with SMTP id y26so1392892wma.2 for ; Fri, 03 May 2019 
From: Jiong Wang
To: alexei.starovoitov@gmail.com, daniel@iogearbox.net
Cc: bpf@vger.kernel.org, netdev@vger.kernel.org, oss-drivers@netronome.com, Jiong Wang
Subject: [PATCH v6 bpf-next 07/17] bpf: verifier: randomize high 32-bit when BPF_F_TEST_RND_HI32 is set
Date: Fri, 3 May 2019 11:42:34 +0100
Message-Id: <1556880164-10689-8-git-send-email-jiong.wang@netronome.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1556880164-10689-1-git-send-email-jiong.wang@netronome.com>
References: <1556880164-10689-1-git-send-email-jiong.wang@netronome.com>

This patch randomizes the high 32 bits of a definition when
BPF_F_TEST_RND_HI32 is set. It does this whenever the flag is set, regardless
of whether there is hardware zero extension support, because this is a test
feature and we want to deliver the most stressful test.

Suggested-by: Alexei Starovoitov
Signed-off-by: Jiong Wang
---
 kernel/bpf/verifier.c | 69 +++++++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 58 insertions(+), 11 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 999da02..31ffbef 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -7648,32 +7648,79 @@ static int opt_remove_nops(struct bpf_verifier_env *env)
 	return 0;
 }
 
-static int opt_subreg_zext_lo32(struct bpf_verifier_env *env)
+static int opt_subreg_zext_lo32_rnd_hi32(struct bpf_verifier_env *env,
+					 const union bpf_attr *attr)
 {
+	struct bpf_insn *patch, zext_patch[2], rnd_hi32_patch[4];
 	struct bpf_insn_aux_data *aux = env->insn_aux_data;
+	int i, patch_len, delta = 0, len = env->prog->len;
 	struct bpf_insn *insns = env->prog->insnsi;
-	int i, delta = 0, len = env->prog->len;
-	struct bpf_insn zext_patch[2];
 	struct bpf_prog *new_prog;
+	bool rnd_hi32;
+
+	rnd_hi32 = attr->prog_flags & BPF_F_TEST_RND_HI32;
 	zext_patch[1] = BPF_ALU32_IMM(BPF_ZEXT, 0, 0);
+	rnd_hi32_patch[1] = BPF_ALU64_IMM(BPF_MOV, BPF_REG_AX, 0);
+	rnd_hi32_patch[2] = BPF_ALU64_IMM(BPF_LSH, BPF_REG_AX, 32);
+	rnd_hi32_patch[3] = BPF_ALU64_REG(BPF_OR, 0, BPF_REG_AX);
 
 	for (i = 0; i < len; i++) {
 		int adj_idx = i + delta;
 		struct bpf_insn insn;
 
-		if (!aux[adj_idx].zext_dst)
+		insn = insns[adj_idx];
+		if (!aux[adj_idx].zext_dst) {
+			u8 code, class;
+			u32 imm_rnd;
+
+			if (!rnd_hi32)
+				continue;
+
+			code = insn.code;
+			class = BPF_CLASS(code);
+			if (insn_no_def(&insn))
+				continue;
+
+			/* NOTE: arg "reg" (the fourth one) is only used for
+			 *       BPF_STX which has been ruled out in above
+			 *       check, it is safe to pass NULL here.
+			 */
+			if (is_reg64(env, &insn, insn.dst_reg, NULL, DST_OP)) {
+				if (class == BPF_LD &&
+				    BPF_MODE(code) == BPF_IMM)
+					i++;
+				continue;
+			}
+
+			/* ctx load could be transformed into wider load. */
+			if (class == BPF_LDX &&
+			    aux[adj_idx].ptr_type == PTR_TO_CTX)
+				continue;
+
+			imm_rnd = get_random_int();
+			rnd_hi32_patch[0] = insn;
+			rnd_hi32_patch[1].imm = imm_rnd;
+			rnd_hi32_patch[3].dst_reg = insn.dst_reg;
+			patch = rnd_hi32_patch;
+			patch_len = 4;
+			goto apply_patch_buffer;
+		}
+
+		if (bpf_jit_hardware_zext())
 			continue;
 
-		insn = insns[adj_idx];
 		zext_patch[0] = insn;
 		zext_patch[1].dst_reg = insn.dst_reg;
-		new_prog = bpf_patch_insn_data(env, adj_idx, zext_patch, 2);
+		patch = zext_patch;
+		patch_len = 2;
+apply_patch_buffer:
+		new_prog = bpf_patch_insn_data(env, adj_idx, patch, patch_len);
 		if (!new_prog)
 			return -ENOMEM;
 		env->prog = new_prog;
 		insns = new_prog->insnsi;
 		aux = env->insn_aux_data;
-		delta += 2;
+		delta += patch_len - 1;
 	}
 
 	return 0;
@@ -8533,10 +8580,10 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr,
 	/* do 32-bit optimization after insn patching has done so those patched
 	 * insns could be handled correctly.
 	 */
-	if (ret == 0 && !bpf_jit_hardware_zext() &&
-	    !bpf_prog_is_dev_bound(env->prog->aux)) {
-		ret = opt_subreg_zext_lo32(env);
-		env->prog->aux->verifier_zext = !ret;
+	if (ret == 0 && !bpf_prog_is_dev_bound(env->prog->aux)) {
+		ret = opt_subreg_zext_lo32_rnd_hi32(env, attr);
+		env->prog->aux->verifier_zext =
+			bpf_jit_hardware_zext() ?
+			false : !ret;
 	}
 
 	if (ret == 0)

From patchwork Fri May 3 10:42:35 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Jiong Wang
X-Patchwork-Id: 1094803
X-Patchwork-Delegate: bpf@iogearbox.net
From: Jiong Wang
To: alexei.starovoitov@gmail.com, daniel@iogearbox.net
Cc: bpf@vger.kernel.org, netdev@vger.kernel.org, oss-drivers@netronome.com, Jiong Wang
Subject: [PATCH v6 bpf-next 08/17] libbpf: add "prog_flags" to bpf_program/bpf_prog_load_attr/bpf_load_program_attr
Date: Fri, 3 May 2019 11:42:35 +0100
Message-Id: <1556880164-10689-9-git-send-email-jiong.wang@netronome.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1556880164-10689-1-git-send-email-jiong.wang@netronome.com>
References: <1556880164-10689-1-git-send-email-jiong.wang@netronome.com>

libbpf doesn't allow passing "prog_flags" during BPF program load in a couple
of load-related APIs: "bpf_load_program_xattr", "load_program" and
"bpf_prog_load_xattr". It makes sense to allow passing "prog_flags", which is
useful for customizing program loading.

Reviewed-by: Jakub Kicinski
Signed-off-by: Jiong Wang
---
 tools/lib/bpf/bpf.c    | 1 +
 tools/lib/bpf/bpf.h    | 1 +
 tools/lib/bpf/libbpf.c | 3 +++
 tools/lib/bpf/libbpf.h | 1 +
 4 files changed, 6 insertions(+)

diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
index 955191c..f79ec49 100644
--- a/tools/lib/bpf/bpf.c
+++ b/tools/lib/bpf/bpf.c
@@ -254,6 +254,7 @@ int bpf_load_program_xattr(const struct bpf_load_program_attr *load_attr,
 	if (load_attr->name)
 		memcpy(attr.prog_name, load_attr->name,
 		       min(strlen(load_attr->name), BPF_OBJ_NAME_LEN - 1));
+	attr.prog_flags = load_attr->prog_flags;
 
 	fd = sys_bpf_prog_load(&attr, sizeof(attr));
 	if (fd >= 0)
diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h
index 9593fec..ff42ca0 100644
--- a/tools/lib/bpf/bpf.h
+++ b/tools/lib/bpf/bpf.h
@@ -87,6 +87,7 @@ struct bpf_load_program_attr {
 	const void *line_info;
 	__u32 line_info_cnt;
 	__u32 log_level;
+	__u32 prog_flags;
 };
 
 /* Flags to direct loading requirements */
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 11a65db..debca21 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -184,6 +184,7 @@ struct bpf_program {
 	void *line_info;
 	__u32 line_info_rec_size;
 	__u32 line_info_cnt;
+	__u32 prog_flags;
 };
 
 enum libbpf_map_type {
@@ -1949,6 +1950,7 @@ load_program(struct bpf_program *prog, struct bpf_insn *insns, int insns_cnt,
 	load_attr.line_info_rec_size = prog->line_info_rec_size;
 	load_attr.line_info_cnt = prog->line_info_cnt;
 	load_attr.log_level = prog->log_level;
+	load_attr.prog_flags = prog->prog_flags;
 
 	if (!load_attr.insns || !load_attr.insns_cnt)
 		return -EINVAL;
@@ -3394,6 +3396,7 @@ int bpf_prog_load_xattr(const struct bpf_prog_load_attr *attr,
 							 expected_attach_type);
 		prog->log_level = attr->log_level;
+		prog->prog_flags = attr->prog_flags;
 
 		if (!first_prog)
 			first_prog = prog;
 	}
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index c5ff005..5abc237 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -320,6 +320,7 @@ struct bpf_prog_load_attr {
 	enum bpf_attach_type expected_attach_type;
 	int ifindex;
 	int log_level;
+	int prog_flags;
 };
 
 LIBBPF_API int bpf_prog_load_xattr(const struct bpf_prog_load_attr *attr,

From patchwork Fri May 3 10:42:36 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Jiong Wang
X-Patchwork-Id: 1094790
X-Patchwork-Delegate: bpf@iogearbox.net
From: Jiong Wang
To: alexei.starovoitov@gmail.com, daniel@iogearbox.net
Cc: bpf@vger.kernel.org, netdev@vger.kernel.org, oss-drivers@netronome.com, Jiong Wang
Subject: [PATCH v6 bpf-next 09/17] selftests: bpf: adjust several test_verifier helpers for insn insertion
Date: Fri, 3 May 2019 11:42:36 +0100
Message-Id: <1556880164-10689-10-git-send-email-jiong.wang@netronome.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1556880164-10689-1-git-send-email-jiong.wang@netronome.com>
References: <1556880164-10689-1-git-send-email-jiong.wang@netronome.com>

- bpf_fill_ld_abs_vlan_push_pop:
  Prevent zext from happening inside the PUSH_CNT loop. It could happen
  because of BPF_LD_ABS (32-bit def) + BPF_JMP (64-bit use), or BPF_LD_ABS +
  EXIT (64-bit use of R0). So change BPF_JMP to BPF_JMP32 and redefine R0 at
  the exit path to cut off the data-flow from inside the loop.

- bpf_fill_jump_around_ld_abs:
  The jump range is limited to 16 bits. Every ld_abs is replaced by 6 insns,
  but on arches like arm and ppc there will be one BPF_ZEXT inserted to
  extend the error value of the inlined ld_abs sequence, which then contains
  7 insns. So set the divisor to 7 so the testcase works on all arches.

- bpf_fill_scale1/bpf_fill_scale2:
  Both contain ~1M BPF_ALU32_IMM insns, which would trigger ~1M insn-patcher
  calls because of the later hi32 randomization when BPF_F_TEST_RND_HI32 is
  set for bpf selftests. The insn patcher is not efficient enough for 1M
  calls; they would hang the machine. So change to BPF_ALU64_IMM to avoid
  hi32 randomization.
Signed-off-by: Jiong Wang
---
 tools/testing/selftests/bpf/test_verifier.c | 29 +++++++++++++++++++----------
 1 file changed, 19 insertions(+), 10 deletions(-)

diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c
index ccd896b..3dcdfd4 100644
--- a/tools/testing/selftests/bpf/test_verifier.c
+++ b/tools/testing/selftests/bpf/test_verifier.c
@@ -138,32 +138,36 @@ static void bpf_fill_ld_abs_vlan_push_pop(struct bpf_test *self)
 loop:
 	for (j = 0; j < PUSH_CNT; j++) {
 		insn[i++] = BPF_LD_ABS(BPF_B, 0);
-		insn[i] = BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0x34, len - i - 2);
+		/* jump to error label */
+		insn[i] = BPF_JMP32_IMM(BPF_JNE, BPF_REG_0, 0x34, len - i - 3);
 		i++;
 		insn[i++] = BPF_MOV64_REG(BPF_REG_1, BPF_REG_6);
 		insn[i++] = BPF_MOV64_IMM(BPF_REG_2, 1);
 		insn[i++] = BPF_MOV64_IMM(BPF_REG_3, 2);
 		insn[i++] = BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
 					 BPF_FUNC_skb_vlan_push),
-		insn[i] = BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, len - i - 2);
+		insn[i] = BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, len - i - 3);
 		i++;
 	}
 
 	for (j = 0; j < PUSH_CNT; j++) {
 		insn[i++] = BPF_LD_ABS(BPF_B, 0);
-		insn[i] = BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0x34, len - i - 2);
+		insn[i] = BPF_JMP32_IMM(BPF_JNE, BPF_REG_0, 0x34, len - i - 3);
 		i++;
 		insn[i++] = BPF_MOV64_REG(BPF_REG_1, BPF_REG_6);
 		insn[i++] = BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
 					 BPF_FUNC_skb_vlan_pop),
-		insn[i] = BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, len - i - 2);
+		insn[i] = BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, len - i - 3);
 		i++;
 	}
 	if (++k < 5)
 		goto loop;
 
-	for (; i < len - 1; i++)
-		insn[i] = BPF_ALU32_IMM(BPF_MOV, BPF_REG_0, 0xbef);
+	for (; i < len - 3; i++)
+		insn[i] = BPF_ALU64_IMM(BPF_MOV, BPF_REG_0, 0xbef);
+	insn[len - 3] = BPF_JMP_A(1);
+	/* error label */
+	insn[len - 2] = BPF_MOV32_IMM(BPF_REG_0, 0);
 	insn[len - 1] = BPF_EXIT_INSN();
 	self->prog_len = len;
 }
@@ -171,8 +175,13 @@ static void bpf_fill_ld_abs_vlan_push_pop(struct bpf_test *self)
 static void bpf_fill_jump_around_ld_abs(struct bpf_test *self)
 {
 	struct bpf_insn *insn = self->fill_insns;
-	/* jump range is limited to 16 bit. every ld_abs is replaced by 6 insns */
-	unsigned int len = (1 << 15) / 6;
+	/* jump range is limited to 16 bit. every ld_abs is replaced by 6 insns,
+	 * but on arches like arm, ppc etc, there will be one BPF_ZEXT inserted
+	 * to extend the error value of the inlined ld_abs sequence which then
+	 * contains 7 insns. so, set the divisor to 7 so the testcase could
+	 * work on all arches.
+	 */
+	unsigned int len = (1 << 15) / 7;
 	int i = 0;
 
 	insn[i++] = BPF_MOV64_REG(BPF_REG_6, BPF_REG_1);
@@ -230,7 +239,7 @@ static void bpf_fill_scale1(struct bpf_test *self)
 	 * within 1m limit add MAX_TEST_INSNS - 1025 MOVs and 1 EXIT
 	 */
 	while (i < MAX_TEST_INSNS - 1025)
-		insn[i++] = BPF_ALU32_IMM(BPF_MOV, BPF_REG_0, 42);
+		insn[i++] = BPF_ALU64_IMM(BPF_MOV, BPF_REG_0, 42);
 	insn[i] = BPF_EXIT_INSN();
 	self->prog_len = i + 1;
 	self->retval = 42;
@@ -261,7 +270,7 @@ static void bpf_fill_scale2(struct bpf_test *self)
 	 * within 1m limit add MAX_TEST_INSNS - 1025 MOVs and 1 EXIT
 	 */
 	while (i < MAX_TEST_INSNS - 1025)
-		insn[i++] = BPF_ALU32_IMM(BPF_MOV, BPF_REG_0, 42);
+		insn[i++] = BPF_ALU64_IMM(BPF_MOV, BPF_REG_0, 42);
 	insn[i] = BPF_EXIT_INSN();
 	self->prog_len = i + 1;
 	self->retval = 42;

From patchwork Fri May 3 10:42:37 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Jiong Wang
X-Patchwork-Id: 1094789
X-Patchwork-Delegate: bpf@iogearbox.net
From: Jiong Wang
To: alexei.starovoitov@gmail.com, daniel@iogearbox.net
Cc: bpf@vger.kernel.org, netdev@vger.kernel.org, oss-drivers@netronome.com, Jiong Wang
Subject: [PATCH v6 bpf-next 10/17] selftests: bpf: enable hi32 randomization for all tests
Date: Fri, 3 May 2019 11:42:37 +0100
Message-Id: <1556880164-10689-11-git-send-email-jiong.wang@netronome.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1556880164-10689-1-git-send-email-jiong.wang@netronome.com>
References: <1556880164-10689-1-git-send-email-jiong.wang@netronome.com>

The previous libbpf patch allows the user to specify "prog_flags" in the BPF
program load APIs. To enable high 32-bit randomization for a test, we need to
set BPF_F_TEST_RND_HI32 in "prog_flags".

To enable such randomization for all tests, we need to make sure all places
pass BPF_F_TEST_RND_HI32. Changing them one by one is not convenient; also,
it would be better if a test could be switched back to "normal" running mode
without a code change. The program load APIs used across bpf selftests are
mostly:

  bpf_prog_load:    load from file
  bpf_load_program: load from raw insns

A test_stub.c is implemented for bpf selftests. It offers two functions for
testing purposes:

  bpf_prog_test_load
  bpf_test_load_program

They are the same as "bpf_prog_load" and "bpf_load_program", except they also
set BPF_F_TEST_RND_HI32.
Given the *_xattr functions are the APIs for customizing any "prog_flags", it
makes little sense to put these two functions into libbpf. Instead, the
following CFLAGS are passed when compiling the host programs:

  -Dbpf_prog_load=bpf_prog_test_load
  -Dbpf_load_program=bpf_test_load_program

They migrate the used load APIs to the test versions, hence enabling high
32-bit randomization for these tests without changing the source code.

Besides all these, several testcases use "bpf_prog_load_attr" directly; their
call sites are updated to pass BPF_F_TEST_RND_HI32.

Signed-off-by: Jiong Wang
---
 tools/testing/selftests/bpf/Makefile             | 10 +++---
 .../selftests/bpf/prog_tests/bpf_verif_scale.c   |  1 +
 tools/testing/selftests/bpf/test_sock_addr.c     |  1 +
 tools/testing/selftests/bpf/test_sock_fields.c   |  1 +
 tools/testing/selftests/bpf/test_socket_cookie.c |  1 +
 tools/testing/selftests/bpf/test_stub.c          | 40 ++++++++++++++++++++++
 tools/testing/selftests/bpf/test_verifier.c      |  2 +-
 7 files changed, 51 insertions(+), 5 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/test_stub.c

diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
index 66f2dca..3f2c131 100644
--- a/tools/testing/selftests/bpf/Makefile
+++ b/tools/testing/selftests/bpf/Makefile
@@ -15,7 +15,9 @@ LLC		?= llc
 LLVM_OBJCOPY	?= llvm-objcopy
 LLVM_READELF	?= llvm-readelf
 BTF_PAHOLE	?= pahole
-CFLAGS += -Wall -O2 -I$(APIDIR) -I$(LIBDIR) -I$(BPFDIR) -I$(GENDIR) $(GENFLAGS) -I../../../include
+CFLAGS += -Wall -O2 -I$(APIDIR) -I$(LIBDIR) -I$(BPFDIR) -I$(GENDIR) $(GENFLAGS) -I../../../include \
+	  -Dbpf_prog_load=bpf_prog_test_load \
+	  -Dbpf_load_program=bpf_test_load_program
 LDLIBS += -lcap -lelf -lrt -lpthread
 
 # Order correspond to 'make run_tests' order
@@ -78,9 +80,9 @@ $(OUTPUT)/test_maps: map_tests/*.c
 
 BPFOBJ := $(OUTPUT)/libbpf.a
 
-$(TEST_GEN_PROGS): $(BPFOBJ)
+$(TEST_GEN_PROGS): test_stub.o $(BPFOBJ)
 
-$(TEST_GEN_PROGS_EXTENDED): $(OUTPUT)/libbpf.a
+$(TEST_GEN_PROGS_EXTENDED): test_stub.o $(OUTPUT)/libbpf.a
 
 $(OUTPUT)/test_dev_cgroup: cgroup_helpers.c
 $(OUTPUT)/test_skb_cgroup_id_user: cgroup_helpers.c
@@ -176,7 +178,7 @@ $(ALU32_BUILD_DIR)/test_progs_32: test_progs.c $(OUTPUT)/libbpf.a\
 						$(ALU32_BUILD_DIR)/urandom_read
 	$(CC) $(TEST_PROGS_CFLAGS) $(CFLAGS) \
 		-o $(ALU32_BUILD_DIR)/test_progs_32 \
-		test_progs.c trace_helpers.c prog_tests/*.c \
+		test_progs.c test_stub.c trace_helpers.c prog_tests/*.c \
 		$(OUTPUT)/libbpf.a $(LDLIBS)
 
 $(ALU32_BUILD_DIR)/test_progs_32: $(PROG_TESTS_H)
diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_verif_scale.c b/tools/testing/selftests/bpf/prog_tests/bpf_verif_scale.c
index 23b159d..2623d15 100644
--- a/tools/testing/selftests/bpf/prog_tests/bpf_verif_scale.c
+++ b/tools/testing/selftests/bpf/prog_tests/bpf_verif_scale.c
@@ -22,6 +22,7 @@ static int check_load(const char *file)
 	attr.file = file;
 	attr.prog_type = BPF_PROG_TYPE_SCHED_CLS;
 	attr.log_level = 4;
+	attr.prog_flags = BPF_F_TEST_RND_HI32;
 	err = bpf_prog_load_xattr(&attr, &obj, &prog_fd);
 	bpf_object__close(obj);
 	if (err)
diff --git a/tools/testing/selftests/bpf/test_sock_addr.c b/tools/testing/selftests/bpf/test_sock_addr.c
index 3f110ea..5d0c4f0 100644
--- a/tools/testing/selftests/bpf/test_sock_addr.c
+++ b/tools/testing/selftests/bpf/test_sock_addr.c
@@ -745,6 +745,7 @@ static int load_path(const struct sock_addr_test *test, const char *path)
 	attr.file = path;
 	attr.prog_type = BPF_PROG_TYPE_CGROUP_SOCK_ADDR;
 	attr.expected_attach_type = test->expected_attach_type;
+	attr.prog_flags = BPF_F_TEST_RND_HI32;
 
 	if (bpf_prog_load_xattr(&attr, &obj, &prog_fd)) {
 		if (test->expected_result != LOAD_REJECT)
diff --git a/tools/testing/selftests/bpf/test_sock_fields.c b/tools/testing/selftests/bpf/test_sock_fields.c
index e089477..f0fc103 100644
--- a/tools/testing/selftests/bpf/test_sock_fields.c
+++ b/tools/testing/selftests/bpf/test_sock_fields.c
@@ -414,6 +414,7 @@ int main(int argc, char **argv)
 	struct bpf_prog_load_attr attr = {
 		.file = "test_sock_fields_kern.o",
 		.prog_type = BPF_PROG_TYPE_CGROUP_SKB,
+		.prog_flags = BPF_F_TEST_RND_HI32,
 	};
 	int cgroup_fd, egress_fd, ingress_fd, err;
 	struct bpf_program *ingress_prog;
diff --git a/tools/testing/selftests/bpf/test_socket_cookie.c b/tools/testing/selftests/bpf/test_socket_cookie.c
index e51d637..cac8ee5 100644
--- a/tools/testing/selftests/bpf/test_socket_cookie.c
+++ b/tools/testing/selftests/bpf/test_socket_cookie.c
@@ -148,6 +148,7 @@ static int run_test(int cgfd)
 	memset(&attr, 0, sizeof(attr));
 	attr.file = SOCKET_COOKIE_PROG;
 	attr.prog_type = BPF_PROG_TYPE_UNSPEC;
+	attr.prog_flags = BPF_F_TEST_RND_HI32;
 
 	err = bpf_prog_load_xattr(&attr, &pobj, &prog_fd);
 	if (err) {
diff --git a/tools/testing/selftests/bpf/test_stub.c b/tools/testing/selftests/bpf/test_stub.c
new file mode 100644
index 0000000..84e81a8
--- /dev/null
+++ b/tools/testing/selftests/bpf/test_stub.c
@@ -0,0 +1,40 @@
+// SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+/* Copyright (C) 2019 Netronome Systems, Inc. */
+
+#include 
+#include 
+#include 
+
+int bpf_prog_test_load(const char *file, enum bpf_prog_type type,
+		       struct bpf_object **pobj, int *prog_fd)
+{
+	struct bpf_prog_load_attr attr;
+
+	memset(&attr, 0, sizeof(struct bpf_prog_load_attr));
+	attr.file = file;
+	attr.prog_type = type;
+	attr.expected_attach_type = 0;
+	attr.prog_flags = BPF_F_TEST_RND_HI32;
+
+	return bpf_prog_load_xattr(&attr, pobj, prog_fd);
+}
+
+int bpf_test_load_program(enum bpf_prog_type type, const struct bpf_insn *insns,
+			  size_t insns_cnt, const char *license,
+			  __u32 kern_version, char *log_buf,
+			  size_t log_buf_sz)
+{
+	struct bpf_load_program_attr load_attr;
+
+	memset(&load_attr, 0, sizeof(struct bpf_load_program_attr));
+	load_attr.prog_type = type;
+	load_attr.expected_attach_type = 0;
+	load_attr.name = NULL;
+	load_attr.insns = insns;
+	load_attr.insns_cnt = insns_cnt;
+	load_attr.license = license;
+	load_attr.kern_version = kern_version;
+	load_attr.prog_flags = BPF_F_TEST_RND_HI32;
+
+	return bpf_load_program_xattr(&load_attr, log_buf, log_buf_sz);
+}
diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c
index 3dcdfd4..71704de 100644
--- a/tools/testing/selftests/bpf/test_verifier.c
+++ b/tools/testing/selftests/bpf/test_verifier.c
@@ -879,7 +879,7 @@ static void do_test_single(struct bpf_test *test, bool unpriv,
 	if (fixup_skips != skips)
 		return;
 
-	pflags = 0;
+	pflags = BPF_F_TEST_RND_HI32;
 	if (test->flags & F_LOAD_WITH_STRICT_ALIGNMENT)
 		pflags |= BPF_F_STRICT_ALIGNMENT;
 	if (test->flags & F_NEEDS_EFFICIENT_UNALIGNED_ACCESS)

From patchwork Fri May 3 10:42:38 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Jiong Wang
X-Patchwork-Id: 1094792
X-Patchwork-Delegate: bpf@iogearbox.net
From: Jiong Wang
To: alexei.starovoitov@gmail.com, daniel@iogearbox.net
Cc: bpf@vger.kernel.org, netdev@vger.kernel.org, oss-drivers@netronome.com, Jiong Wang, Shubham Bansal
Subject: [PATCH v6 bpf-next 11/17] arm: bpf: eliminate zero extension code-gen
Date: Fri, 3 May 2019 11:42:38 +0100
Message-Id: <1556880164-10689-12-git-send-email-jiong.wang@netronome.com>
In-Reply-To: <1556880164-10689-1-git-send-email-jiong.wang@netronome.com>
References: <1556880164-10689-1-git-send-email-jiong.wang@netronome.com>

Cc: Shubham Bansal
Signed-off-by: Jiong Wang
---
 arch/arm/net/bpf_jit_32.c | 35 ++++++++++++++++++++++++++---------
 1 file changed, 26 insertions(+), 9 deletions(-)

diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c
index c8bfbbf..a6f78c8 100644
--- a/arch/arm/net/bpf_jit_32.c
+++ b/arch/arm/net/bpf_jit_32.c
@@ -736,7 +736,8 @@ static inline void emit_a32_alu_r64(const bool is64, const s8 dst[],
 
 	/* ALU operation */
 	emit_alu_r(rd[1], rs, true, false, op, ctx);
-	emit_a32_mov_i(rd[0], 0, ctx);
+	if (!ctx->prog->aux->verifier_zext)
+		emit_a32_mov_i(rd[0], 0, ctx);
 }
 	arm_bpf_put_reg64(dst, rd, ctx);
@@ -758,8 +759,9 @@ static inline void emit_a32_mov_r64(const bool is64, const s8 dst[],
 			       struct jit_ctx *ctx) {
 	if (!is64) {
 		emit_a32_mov_r(dst_lo, src_lo, ctx);
-		/* Zero out high 4 bytes */
-		emit_a32_mov_i(dst_hi, 0, ctx);
+		if (!ctx->prog->aux->verifier_zext)
+			/* Zero out high 4 bytes */
+			emit_a32_mov_i(dst_hi, 0, ctx);
 	} else if (__LINUX_ARM_ARCH__ < 6 &&
 		   ctx->cpu_architecture < CPU_ARCH_ARMv5TE) {
 		/* complete 8 byte move */
@@ -1060,17 +1062,20 @@ static inline void emit_ldx_r(const s8 dst[], const s8 src,
 	case BPF_B:
 		/* Load a Byte */
 		emit(ARM_LDRB_I(rd[1], rm, off), ctx);
-		emit_a32_mov_i(rd[0], 0, ctx);
+		if (!ctx->prog->aux->verifier_zext)
+			emit_a32_mov_i(rd[0], 0, ctx);
 		break;
 	case BPF_H:
 		/* Load a HalfWord */
 		emit(ARM_LDRH_I(rd[1], rm, off), ctx);
-		emit_a32_mov_i(rd[0], 0, ctx);
+		if (!ctx->prog->aux->verifier_zext)
+			emit_a32_mov_i(rd[0], 0, ctx);
 		break;
 	case BPF_W:
 		/* Load a Word */
 		emit(ARM_LDR_I(rd[1], rm, off), ctx);
-		emit_a32_mov_i(rd[0], 0, ctx);
+		if (!ctx->prog->aux->verifier_zext)
+			emit_a32_mov_i(rd[0], 0, ctx);
 		break;
 	case BPF_DW:
 		/* Load a Double Word */
@@ -1352,6 +1357,10 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 	switch (code) {
 	/* ALU operations */
+	/* dst = (u32) dst */
+	case BPF_ALU | BPF_ZEXT:
+		emit_a32_mov_i(dst_hi, 0, ctx);
+		break;
 	/* dst = src */
 	case BPF_ALU | BPF_MOV | BPF_K:
 	case BPF_ALU | BPF_MOV | BPF_X:
@@ -1438,7 +1447,8 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 		}
 		emit_udivmod(rd_lo, rd_lo, rt, ctx, BPF_OP(code));
 		arm_bpf_put_reg32(dst_lo, rd_lo, ctx);
-		emit_a32_mov_i(dst_hi, 0, ctx);
+		if (!ctx->prog->aux->verifier_zext)
+			emit_a32_mov_i(dst_hi, 0, ctx);
 		break;
 	case BPF_ALU64 | BPF_DIV | BPF_K:
 	case BPF_ALU64 | BPF_DIV | BPF_X:
@@ -1453,7 +1463,8 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 		return -EINVAL;
 	if (imm)
 		emit_a32_alu_i(dst_lo, imm, ctx, BPF_OP(code));
-	emit_a32_mov_i(dst_hi,
0, ctx);
+	if (!ctx->prog->aux->verifier_zext)
+		emit_a32_mov_i(dst_hi, 0, ctx);
 	break;
 	/* dst = dst << imm */
 	case BPF_ALU64 | BPF_LSH | BPF_K:
@@ -1488,7 +1499,8 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 	/* dst = ~dst */
 	case BPF_ALU | BPF_NEG:
 		emit_a32_alu_i(dst_lo, 0, ctx, BPF_OP(code));
-		emit_a32_mov_i(dst_hi, 0, ctx);
+		if (!ctx->prog->aux->verifier_zext)
+			emit_a32_mov_i(dst_hi, 0, ctx);
 		break;
 	/* dst = ~dst (64 bit) */
 	case BPF_ALU64 | BPF_NEG:
@@ -1838,6 +1850,11 @@ void bpf_jit_compile(struct bpf_prog *prog)
 	/* Nothing to do here. We support Internal BPF. */
 }
 
+bool bpf_jit_hardware_zext(void)
+{
+	return false;
+}
+
 struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 {
 	struct bpf_prog *tmp, *orig_prog = prog;

From patchwork Fri May 3 10:42:39 2019
X-Patchwork-Submitter: Jiong Wang
X-Patchwork-Id: 1094788
X-Patchwork-Delegate: bpf@iogearbox.net
From: Jiong Wang
To: alexei.starovoitov@gmail.com, daniel@iogearbox.net
Cc:
 bpf@vger.kernel.org, netdev@vger.kernel.org, oss-drivers@netronome.com, Jiong Wang, "Naveen N . Rao", Sandipan Das
Subject: [PATCH v6 bpf-next 12/17] powerpc: bpf: eliminate zero extension code-gen
Date: Fri, 3 May 2019 11:42:39 +0100
Message-Id: <1556880164-10689-13-git-send-email-jiong.wang@netronome.com>
In-Reply-To: <1556880164-10689-1-git-send-email-jiong.wang@netronome.com>
References: <1556880164-10689-1-git-send-email-jiong.wang@netronome.com>

Cc: Naveen N. Rao
Cc: Sandipan Das
Signed-off-by: Jiong Wang
---
 arch/powerpc/net/bpf_jit_comp64.c | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
index 21a1dcd..9fef73dc 100644
--- a/arch/powerpc/net/bpf_jit_comp64.c
+++ b/arch/powerpc/net/bpf_jit_comp64.c
@@ -557,9 +557,15 @@ static int bpf_jit_build_body(struct bpf_prog *fp, u32 *image,
 			goto bpf_alu32_trunc;
 		break;
 
+	/*
+	 * ZEXT
+	 */
+	case BPF_ALU | BPF_ZEXT:
+		PPC_RLWINM(dst_reg, dst_reg, 0, 0, 31);
+		break;
 bpf_alu32_trunc:
 	/* Truncate to 32-bits */
-	if (BPF_CLASS(code) == BPF_ALU)
+	if (BPF_CLASS(code) == BPF_ALU && !fp->aux->verifier_zext)
 		PPC_RLWINM(dst_reg, dst_reg, 0, 0, 31);
 	break;
 
@@ -1046,6 +1052,11 @@ struct powerpc64_jit_data {
 	struct codegen_context ctx;
 };
 
+bool bpf_jit_hardware_zext(void)
+{
+	return false;
+}
+
 struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
 {
 	u32 proglen;

From patchwork Fri May 3 10:42:40 2019
X-Patchwork-Submitter: Jiong Wang
X-Patchwork-Id: 1094791
X-Patchwork-Delegate: bpf@iogearbox.net
From: Jiong Wang
To: alexei.starovoitov@gmail.com, daniel@iogearbox.net
Cc: bpf@vger.kernel.org, netdev@vger.kernel.org, oss-drivers@netronome.com, Jiong Wang, Martin Schwidefsky, Heiko Carstens
Subject: [PATCH v6 bpf-next 13/17] s390: bpf: eliminate zero extension code-gen
Date: Fri, 3 May 2019 11:42:40 +0100
Message-Id: <1556880164-10689-14-git-send-email-jiong.wang@netronome.com>
In-Reply-To: <1556880164-10689-1-git-send-email-jiong.wang@netronome.com>
References: <1556880164-10689-1-git-send-email-jiong.wang@netronome.com>

Cc: Martin Schwidefsky
Cc: Heiko Carstens
Signed-off-by: Jiong Wang
---
 arch/s390/net/bpf_jit_comp.c | 20 +++++++++++++++++---
 1 file changed, 17 insertions(+), 3 deletions(-)

diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
index 51dd026..8315b2e 100644
--- a/arch/s390/net/bpf_jit_comp.c
+++ b/arch/s390/net/bpf_jit_comp.c
@@ -299,9 +299,11 @@ static inline void reg_set_seen(struct bpf_jit *jit, u32 b1)
 
 #define EMIT_ZERO(b1)						\
 ({								\
-	/* llgfr %dst,%dst (zero extend to 64 bit) */		\
-	EMIT4(0xb9160000, b1, b1);				\
-	REG_SET_SEEN(b1);					\
+	if (!fp->aux->verifier_zext) {				\
+		/* llgfr %dst,%dst (zero extend to 64 bit) */	\
+
EMIT4(0xb9160000, b1, b1);			\
+		REG_SET_SEEN(b1);				\
+	}							\
 })
 
 /*
@@ -515,6 +517,13 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp, int i
 		jit->seen |= SEEN_REG_AX;
 	switch (insn->code) {
 	/*
+	 * BPF_ZEXT
+	 */
+	case BPF_ALU | BPF_ZEXT: /* dst = (u32) dst */
+		/* llgfr %dst,%dst */
+		EMIT4(0xb9160000, dst_reg, dst_reg);
+		break;
+	/*
 	 * BPF_MOV
 	 */
 	case BPF_ALU | BPF_MOV | BPF_X: /* dst = (u32) src */
@@ -1282,6 +1291,11 @@ static int bpf_jit_prog(struct bpf_jit *jit, struct bpf_prog *fp)
 	return 0;
 }
 
+bool bpf_jit_hardware_zext(void)
+{
+	return false;
+}
+
 /*
  * Compile eBPF program "fp"
  */

From patchwork Fri May 3 10:42:41 2019
X-Patchwork-Submitter: Jiong Wang
X-Patchwork-Id: 1094801
X-Patchwork-Delegate: bpf@iogearbox.net
From: Jiong Wang
To: alexei.starovoitov@gmail.com, daniel@iogearbox.net
Cc: bpf@vger.kernel.org, netdev@vger.kernel.org, oss-drivers@netronome.com, Jiong Wang, "David S .
Miller"
Subject: [PATCH v6 bpf-next 14/17] sparc: bpf: eliminate zero extension code-gen
Date: Fri, 3 May 2019 11:42:41 +0100
Message-Id: <1556880164-10689-15-git-send-email-jiong.wang@netronome.com>
In-Reply-To: <1556880164-10689-1-git-send-email-jiong.wang@netronome.com>
References: <1556880164-10689-1-git-send-email-jiong.wang@netronome.com>

Cc: David S. Miller
Signed-off-by: Jiong Wang
---
 arch/sparc/net/bpf_jit_comp_64.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/arch/sparc/net/bpf_jit_comp_64.c b/arch/sparc/net/bpf_jit_comp_64.c
index 65428e7..8bac761 100644
--- a/arch/sparc/net/bpf_jit_comp_64.c
+++ b/arch/sparc/net/bpf_jit_comp_64.c
@@ -905,6 +905,10 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 		ctx->saw_frame_pointer = true;
 
 	switch (code) {
+	/* dst = (u32) dst */
+	case BPF_ALU | BPF_ZEXT:
+		emit_alu_K(SRL, dst, 0, ctx);
+		break;
 	/* dst = src */
 	case BPF_ALU | BPF_MOV | BPF_X:
 		emit_alu3_K(SRL, src, 0, dst, ctx);
@@ -1144,7 +1148,8 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 		break;
 
 	do_alu32_trunc:
-		if (BPF_CLASS(code) == BPF_ALU)
+		if (BPF_CLASS(code) == BPF_ALU &&
+		    !ctx->prog->aux->verifier_zext)
 			emit_alu_K(SRL, dst, 0, ctx);
 		break;
 
@@ -1432,6 +1437,11 @@ static void jit_fill_hole(void *area, unsigned int size)
 		*ptr++ = 0x91d02005; /* ta 5 */
 }
 
+bool bpf_jit_hardware_zext(void)
+{
+	return false;
+}
+
 struct sparc64_jit_data {
 	struct bpf_binary_header *header;
 	u8 *image;

From patchwork Fri May 3 10:42:42 2019
X-Patchwork-Submitter: Jiong Wang
X-Patchwork-Id: 1094796
X-Patchwork-Delegate: bpf@iogearbox.net
From: Jiong Wang
To: alexei.starovoitov@gmail.com, daniel@iogearbox.net
Cc: bpf@vger.kernel.org, netdev@vger.kernel.org, oss-drivers@netronome.com, Jiong Wang, Wang YanQing
Subject: [PATCH v6 bpf-next 15/17] x32: bpf: eliminate zero extension code-gen
Date: Fri, 3 May 2019 11:42:42 +0100
Message-Id: <1556880164-10689-16-git-send-email-jiong.wang@netronome.com>
In-Reply-To: <1556880164-10689-1-git-send-email-jiong.wang@netronome.com>
References: <1556880164-10689-1-git-send-email-jiong.wang@netronome.com>

Cc: Wang YanQing
Signed-off-by: Jiong Wang
---
 arch/x86/net/bpf_jit_comp32.c | 39 ++++++++++++++++++++++++++++-----------
 1 file changed, 28 insertions(+), 11 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp32.c b/arch/x86/net/bpf_jit_comp32.c
index 0d9cdff..16c4f4e 100644
--- a/arch/x86/net/bpf_jit_comp32.c
+++ b/arch/x86/net/bpf_jit_comp32.c
@@ -567,7 +567,7 @@ static inline void emit_ia32_alu_r(const bool is64, const bool hi, const u8 op,
 static inline void emit_ia32_alu_r64(const bool is64, const u8 op,
 				     const u8 dst[], const
u8 src[],
 				     bool dstk, bool sstk,
-				     u8 **pprog)
+				     u8 **pprog, const struct bpf_prog_aux *aux)
 {
 	u8 *prog = *pprog;
@@ -575,7 +575,7 @@ static inline void emit_ia32_alu_r64(const bool is64, const u8 op,
 	if (is64)
 		emit_ia32_alu_r(is64, true, op, dst_hi, src_hi, dstk, sstk,
 				&prog);
-	else
+	else if (!aux->verifier_zext)
 		emit_ia32_mov_i(dst_hi, 0, dstk, &prog);
 	*pprog = prog;
 }
@@ -666,7 +666,8 @@ static inline void emit_ia32_alu_i(const bool is64, const bool hi, const u8 op,
 /* ALU operation (64 bit) */
 static inline void emit_ia32_alu_i64(const bool is64, const u8 op,
 				     const u8 dst[], const u32 val,
-				     bool dstk, u8 **pprog)
+				     bool dstk, u8 **pprog,
+				     const struct bpf_prog_aux *aux)
 {
 	u8 *prog = *pprog;
 	u32 hi = 0;
@@ -677,7 +678,7 @@ static inline void emit_ia32_alu_i64(const bool is64, const u8 op,
 	emit_ia32_alu_i(is64, false, op, dst_lo, val, dstk, &prog);
 	if (is64)
 		emit_ia32_alu_i(is64, true, op, dst_hi, hi, dstk, &prog);
-	else
+	else if (!aux->verifier_zext)
 		emit_ia32_mov_i(dst_hi, 0, dstk, &prog);
 
 	*pprog = prog;
@@ -1642,6 +1643,10 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 
 	switch (code) {
 	/* ALU operations */
+	/* dst = (u32) dst */
+	case BPF_ALU | BPF_ZEXT:
+		emit_ia32_mov_i(dst_hi, 0, dstk, &prog);
+		break;
 	/* dst = src */
 	case BPF_ALU | BPF_MOV | BPF_K:
 	case BPF_ALU | BPF_MOV | BPF_X:
@@ -1690,11 +1695,13 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 		switch (BPF_SRC(code)) {
 		case BPF_X:
 			emit_ia32_alu_r64(is64, BPF_OP(code), dst,
-					  src, dstk, sstk, &prog);
+					  src, dstk, sstk, &prog,
+					  bpf_prog->aux);
 			break;
 		case BPF_K:
 			emit_ia32_alu_i64(is64, BPF_OP(code), dst,
-					  imm32, dstk, &prog);
+					  imm32, dstk, &prog,
+					  bpf_prog->aux);
 			break;
 		}
 		break;
@@ -1713,7 +1720,8 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 					  false, &prog);
 			break;
 		}
-		emit_ia32_mov_i(dst_hi, 0, dstk, &prog);
+		if (!bpf_prog->aux->verifier_zext)
+			emit_ia32_mov_i(dst_hi, 0, dstk, &prog);
 		break;
 	case BPF_ALU | BPF_LSH | BPF_X:
case BPF_ALU | BPF_RSH | BPF_X:
@@ -1733,7 +1741,8 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 					  &prog);
 			break;
 		}
-		emit_ia32_mov_i(dst_hi, 0, dstk, &prog);
+		if (!bpf_prog->aux->verifier_zext)
+			emit_ia32_mov_i(dst_hi, 0, dstk, &prog);
 		break;
 	/* dst = dst / src(imm) */
 	/* dst = dst % src(imm) */
@@ -1755,7 +1764,8 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 					  &prog);
 			break;
 		}
-		emit_ia32_mov_i(dst_hi, 0, dstk, &prog);
+		if (!bpf_prog->aux->verifier_zext)
+			emit_ia32_mov_i(dst_hi, 0, dstk, &prog);
 		break;
 	case BPF_ALU64 | BPF_DIV | BPF_K:
 	case BPF_ALU64 | BPF_DIV | BPF_X:
@@ -1772,7 +1782,8 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 		EMIT2_off32(0xC7, add_1reg(0xC0, IA32_ECX), imm32);
 		emit_ia32_shift_r(BPF_OP(code), dst_lo, IA32_ECX, dstk,
 				  false, &prog);
-		emit_ia32_mov_i(dst_hi, 0, dstk, &prog);
+		if (!bpf_prog->aux->verifier_zext)
+			emit_ia32_mov_i(dst_hi, 0, dstk, &prog);
 		break;
 	/* dst = dst << imm */
 	case BPF_ALU64 | BPF_LSH | BPF_K:
@@ -1808,7 +1819,8 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 	case BPF_ALU | BPF_NEG:
 		emit_ia32_alu_i(is64, false, BPF_OP(code), dst_lo, 0,
 				dstk, &prog);
-		emit_ia32_mov_i(dst_hi, 0, dstk, &prog);
+		if (!bpf_prog->aux->verifier_zext)
+			emit_ia32_mov_i(dst_hi, 0, dstk, &prog);
 		break;
 	/* dst = ~dst (64 bit) */
 	case BPF_ALU64 | BPF_NEG:
@@ -2367,6 +2379,11 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 	return proglen;
 }
 
+bool bpf_jit_hardware_zext(void)
+{
+	return false;
+}
+
 struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 {
 	struct bpf_binary_header *header = NULL;

From patchwork Fri May 3 10:42:43 2019
X-Patchwork-Submitter: Jiong Wang
X-Patchwork-Id: 1094795
X-Patchwork-Delegate: bpf@iogearbox.net
From: Jiong Wang
To: alexei.starovoitov@gmail.com, daniel@iogearbox.net
Cc: bpf@vger.kernel.org, netdev@vger.kernel.org, oss-drivers@netronome.com, Jiong Wang
Subject: [PATCH v6 bpf-next 16/17] riscv: bpf: eliminate zero extension code-gen
Date: Fri, 3 May 2019 11:42:43 +0100
Message-Id: <1556880164-10689-17-git-send-email-jiong.wang@netronome.com>
In-Reply-To: <1556880164-10689-1-git-send-email-jiong.wang@netronome.com>
References: <1556880164-10689-1-git-send-email-jiong.wang@netronome.com>

Acked-by: Björn Töpel
Signed-off-by: Jiong Wang
---
 arch/riscv/net/bpf_jit_comp.c | 36 +++++++++++++++++++++++-------------
 1 file changed, 23 insertions(+), 13 deletions(-)

diff --git a/arch/riscv/net/bpf_jit_comp.c b/arch/riscv/net/bpf_jit_comp.c
index 80b12aa..3074c9b 100644
--- a/arch/riscv/net/bpf_jit_comp.c
+++ b/arch/riscv/net/bpf_jit_comp.c
@@ -731,6 +731,7 @@ static int emit_insn(const struct bpf_insn
*insn, struct rv_jit_context *ctx,
 {
 	bool is64 = BPF_CLASS(insn->code) == BPF_ALU64 ||
 		    BPF_CLASS(insn->code) == BPF_JMP;
+	struct bpf_prog_aux *aux = ctx->prog->aux;
 	int rvoff, i = insn - ctx->prog->insnsi;
 	u8 rd = -1, rs = -1, code = insn->code;
 	s16 off = insn->off;
@@ -739,11 +740,15 @@ static int emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 	init_regs(&rd, &rs, insn, ctx);

 	switch (code) {
+	/* dst = (u32) dst */
+	case BPF_ALU | BPF_ZEXT:
+		emit_zext_32(rd, ctx);
+		break;
 	/* dst = src */
 	case BPF_ALU | BPF_MOV | BPF_X:
 	case BPF_ALU64 | BPF_MOV | BPF_X:
 		emit(is64 ? rv_addi(rd, rs, 0) : rv_addiw(rd, rs, 0), ctx);
-		if (!is64)
+		if (!is64 && !aux->verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;

@@ -771,19 +776,19 @@ static int emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 	case BPF_ALU | BPF_MUL | BPF_X:
 	case BPF_ALU64 | BPF_MUL | BPF_X:
 		emit(is64 ? rv_mul(rd, rd, rs) : rv_mulw(rd, rd, rs), ctx);
-		if (!is64)
+		if (!is64 && !aux->verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
 	case BPF_ALU | BPF_DIV | BPF_X:
 	case BPF_ALU64 | BPF_DIV | BPF_X:
 		emit(is64 ? rv_divu(rd, rd, rs) : rv_divuw(rd, rd, rs), ctx);
-		if (!is64)
+		if (!is64 && !aux->verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
 	case BPF_ALU | BPF_MOD | BPF_X:
 	case BPF_ALU64 | BPF_MOD | BPF_X:
 		emit(is64 ? rv_remu(rd, rd, rs) : rv_remuw(rd, rd, rs), ctx);
-		if (!is64)
+		if (!is64 && !aux->verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
 	case BPF_ALU | BPF_LSH | BPF_X:
@@ -867,7 +872,7 @@ static int emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 	case BPF_ALU | BPF_MOV | BPF_K:
 	case BPF_ALU64 | BPF_MOV | BPF_K:
 		emit_imm(rd, imm, ctx);
-		if (!is64)
+		if (!is64 && !aux->verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;

@@ -882,7 +887,7 @@ static int emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 			emit(is64 ?
			     rv_add(rd, rd, RV_REG_T1) :
 			     rv_addw(rd, rd, RV_REG_T1), ctx);
 		}
-		if (!is64)
+		if (!is64 && !aux->verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
 	case BPF_ALU | BPF_SUB | BPF_K:
@@ -895,7 +900,7 @@ static int emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 			emit(is64 ? rv_sub(rd, rd, RV_REG_T1) :
 			     rv_subw(rd, rd, RV_REG_T1), ctx);
 		}
-		if (!is64)
+		if (!is64 && !aux->verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
 	case BPF_ALU | BPF_AND | BPF_K:
@@ -906,7 +911,7 @@ static int emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 			emit_imm(RV_REG_T1, imm, ctx);
 			emit(rv_and(rd, rd, RV_REG_T1), ctx);
 		}
-		if (!is64)
+		if (!is64 && !aux->verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
 	case BPF_ALU | BPF_OR | BPF_K:
@@ -917,7 +922,7 @@ static int emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 			emit_imm(RV_REG_T1, imm, ctx);
 			emit(rv_or(rd, rd, RV_REG_T1), ctx);
 		}
-		if (!is64)
+		if (!is64 && !aux->verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
 	case BPF_ALU | BPF_XOR | BPF_K:
@@ -928,7 +933,7 @@ static int emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 			emit_imm(RV_REG_T1, imm, ctx);
 			emit(rv_xor(rd, rd, RV_REG_T1), ctx);
 		}
-		if (!is64)
+		if (!is64 && !aux->verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
 	case BPF_ALU | BPF_MUL | BPF_K:
@@ -936,7 +941,7 @@ static int emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 		emit_imm(RV_REG_T1, imm, ctx);
 		emit(is64 ? rv_mul(rd, rd, RV_REG_T1) :
 		     rv_mulw(rd, rd, RV_REG_T1), ctx);
-		if (!is64)
+		if (!is64 && !aux->verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
 	case BPF_ALU | BPF_DIV | BPF_K:
@@ -944,7 +949,7 @@ static int emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 		emit_imm(RV_REG_T1, imm, ctx);
 		emit(is64 ?
		     rv_divu(rd, rd, RV_REG_T1) :
 		     rv_divuw(rd, rd, RV_REG_T1), ctx);
-		if (!is64)
+		if (!is64 && !aux->verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
 	case BPF_ALU | BPF_MOD | BPF_K:
@@ -952,7 +957,7 @@ static int emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 		emit_imm(RV_REG_T1, imm, ctx);
 		emit(is64 ? rv_remu(rd, rd, RV_REG_T1) :
 		     rv_remuw(rd, rd, RV_REG_T1), ctx);
-		if (!is64)
+		if (!is64 && !aux->verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
 	case BPF_ALU | BPF_LSH | BPF_K:
@@ -1503,6 +1508,11 @@ static void bpf_flush_icache(void *start, void *end)
 	flush_icache_range((unsigned long)start, (unsigned long)end);
 }

+bool bpf_jit_hardware_zext(void)
+{
+	return false;
+}
+
 struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 {
 	bool tmp_blinded = false, extra_pass = false;

From patchwork Fri May 3 10:42:44 2019
X-Patchwork-Submitter: Jiong Wang
X-Patchwork-Id: 1094799
X-Patchwork-Delegate: bpf@iogearbox.net
From: Jiong Wang
To: alexei.starovoitov@gmail.com,
    daniel@iogearbox.net
Cc: bpf@vger.kernel.org, netdev@vger.kernel.org, oss-drivers@netronome.com,
    Jiong Wang
Subject: [PATCH v6 bpf-next 17/17] nfp: bpf: eliminate zero extension code-gen
Date: Fri, 3 May 2019 11:42:44 +0100
Message-Id: <1556880164-10689-18-git-send-email-jiong.wang@netronome.com>
In-Reply-To: <1556880164-10689-1-git-send-email-jiong.wang@netronome.com>
References: <1556880164-10689-1-git-send-email-jiong.wang@netronome.com>

This patch eliminates zero-extension code-gen for instructions, covering
both ALU and load/store. The only exception is ctx loads: the offload
target doesn't go through the host's ctx conversion logic, so we do a
customized load and ignore the zext flag set by the verifier.

Reviewed-by: Jakub Kicinski
Signed-off-by: Jiong Wang
---
 drivers/net/ethernet/netronome/nfp/bpf/jit.c      | 115 +++++++++++++---------
 drivers/net/ethernet/netronome/nfp/bpf/main.h     |   2 +
 drivers/net/ethernet/netronome/nfp/bpf/verifier.c |  12 +++
 3 files changed, 81 insertions(+), 48 deletions(-)

diff --git a/drivers/net/ethernet/netronome/nfp/bpf/jit.c b/drivers/net/ethernet/netronome/nfp/bpf/jit.c
index f272247..634bae0 100644
--- a/drivers/net/ethernet/netronome/nfp/bpf/jit.c
+++ b/drivers/net/ethernet/netronome/nfp/bpf/jit.c
@@ -612,6 +612,13 @@ static void wrp_immed(struct nfp_prog *nfp_prog, swreg dst, u32 imm)
 }

 static void
+wrp_zext(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta, u8 dst)
+{
+	if (meta->flags & FLAG_INSN_DO_ZEXT)
+		wrp_immed(nfp_prog, reg_both(dst + 1), 0);
+}
+
+static void
 wrp_immed_relo(struct nfp_prog *nfp_prog, swreg dst, u32 imm,
	       enum nfp_relo_type relo)
 {
@@ -847,7 +854,8 @@ static int nfp_cpp_memcpy(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
 }

 static int
-data_ld(struct nfp_prog *nfp_prog, swreg offset, u8 dst_gpr, int size)
+data_ld(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta, swreg
offset,
+	u8 dst_gpr, int size)
 {
 	unsigned int i;
 	u16 shift, sz;
@@ -870,14 +878,15 @@ data_ld(struct nfp_prog *nfp_prog, swreg offset, u8 dst_gpr, int size)
 		wrp_mov(nfp_prog, reg_both(dst_gpr + i), reg_xfer(i));

 	if (i < 2)
-		wrp_immed(nfp_prog, reg_both(dst_gpr + 1), 0);
+		wrp_zext(nfp_prog, meta, dst_gpr);

 	return 0;
 }

 static int
-data_ld_host_order(struct nfp_prog *nfp_prog, u8 dst_gpr,
-		   swreg lreg, swreg rreg, int size, enum cmd_mode mode)
+data_ld_host_order(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta,
+		   u8 dst_gpr, swreg lreg, swreg rreg, int size,
+		   enum cmd_mode mode)
 {
 	unsigned int i;
 	u8 mask, sz;
@@ -900,33 +909,34 @@ data_ld_host_order(struct nfp_prog *nfp_prog, u8 dst_gpr,
 		wrp_mov(nfp_prog, reg_both(dst_gpr + i), reg_xfer(i));

 	if (i < 2)
-		wrp_immed(nfp_prog, reg_both(dst_gpr + 1), 0);
+		wrp_zext(nfp_prog, meta, dst_gpr);

 	return 0;
 }

 static int
-data_ld_host_order_addr32(struct nfp_prog *nfp_prog, u8 src_gpr, swreg offset,
-			  u8 dst_gpr, u8 size)
+data_ld_host_order_addr32(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta,
+			  u8 src_gpr, swreg offset, u8 dst_gpr, u8 size)
 {
-	return data_ld_host_order(nfp_prog, dst_gpr, reg_a(src_gpr), offset,
-				  size, CMD_MODE_32b);
+	return data_ld_host_order(nfp_prog, meta, dst_gpr, reg_a(src_gpr),
+				  offset, size, CMD_MODE_32b);
 }

 static int
-data_ld_host_order_addr40(struct nfp_prog *nfp_prog, u8 src_gpr, swreg offset,
-			  u8 dst_gpr, u8 size)
+data_ld_host_order_addr40(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta,
+			  u8 src_gpr, swreg offset, u8 dst_gpr, u8 size)
 {
 	swreg rega, regb;

 	addr40_offset(nfp_prog, src_gpr, offset, &rega, &regb);

-	return data_ld_host_order(nfp_prog, dst_gpr, rega, regb,
+	return data_ld_host_order(nfp_prog, meta, dst_gpr, rega, regb,
 				  size, CMD_MODE_40b_BA);
 }

 static int
-construct_data_ind_ld(struct nfp_prog *nfp_prog, u16 offset, u16 src, u8 size)
+construct_data_ind_ld(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta,
+		      u16 offset, u16 src, u8 size)
 {
 	swreg tmp_reg;

@@
-942,10 +952,12 @@ construct_data_ind_ld(struct nfp_prog *nfp_prog, u16 offset, u16 src, u8 size)
 	emit_br_relo(nfp_prog, BR_BLO, BR_OFF_RELO, 0, RELO_BR_GO_ABORT);

 	/* Load data */
-	return data_ld(nfp_prog, imm_b(nfp_prog), 0, size);
+	return data_ld(nfp_prog, meta, imm_b(nfp_prog), 0, size);
 }

-static int construct_data_ld(struct nfp_prog *nfp_prog, u16 offset, u8 size)
+static int
+construct_data_ld(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta,
+		  u16 offset, u8 size)
 {
 	swreg tmp_reg;

@@ -956,7 +968,7 @@ static int construct_data_ld(struct nfp_prog *nfp_prog, u16 offset, u8 size)
 	/* Load data */
 	tmp_reg = re_load_imm_any(nfp_prog, offset, imm_b(nfp_prog));
-	return data_ld(nfp_prog, tmp_reg, 0, size);
+	return data_ld(nfp_prog, meta, tmp_reg, 0, size);
 }

 static int
@@ -1193,7 +1205,7 @@ mem_op_stack(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta,
 	}

 	if (clr_gpr && size < 8)
-		wrp_immed(nfp_prog, reg_both(gpr + 1), 0);
+		wrp_zext(nfp_prog, meta, gpr);

 	while (size) {
 		u32 slice_end;
@@ -1294,9 +1306,10 @@ wrp_alu32_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta,
	      enum alu_op alu_op)
 {
 	const struct bpf_insn *insn = &meta->insn;
+	u8 dst = insn->dst_reg * 2;

-	wrp_alu_imm(nfp_prog, insn->dst_reg * 2, alu_op, insn->imm);
-	wrp_immed(nfp_prog, reg_both(insn->dst_reg * 2 + 1), 0);
+	wrp_alu_imm(nfp_prog, dst, alu_op, insn->imm);
+	wrp_zext(nfp_prog, meta, dst);

 	return 0;
 }
@@ -1308,7 +1321,7 @@ wrp_alu32_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta,
 	u8 dst = meta->insn.dst_reg * 2, src = meta->insn.src_reg * 2;

 	emit_alu(nfp_prog, reg_both(dst), reg_a(dst), alu_op, reg_b(src));
-	wrp_immed(nfp_prog, reg_both(meta->insn.dst_reg * 2 + 1), 0);
+	wrp_zext(nfp_prog, meta, dst);

 	return 0;
 }
@@ -2385,12 +2398,14 @@ static int neg_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
 	u8 dst = meta->insn.dst_reg * 2;

 	emit_alu(nfp_prog, reg_both(dst), reg_imm(0), ALU_OP_SUB, reg_b(dst));
-	wrp_immed(nfp_prog, reg_both(meta->insn.dst_reg * 2
+ 1), 0);
+	wrp_zext(nfp_prog, meta, dst);

 	return 0;
 }

-static int __ashr_imm(struct nfp_prog *nfp_prog, u8 dst, u8 shift_amt)
+static int
+__ashr_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta, u8 dst,
+	   u8 shift_amt)
 {
 	if (shift_amt) {
 		/* Set signedness bit (MSB of result). */
@@ -2399,7 +2414,7 @@ static int __ashr_imm(struct nfp_prog *nfp_prog, u8 dst, u8 shift_amt)
 		emit_shf(nfp_prog, reg_both(dst), reg_none(), SHF_OP_ASHR,
 			 reg_b(dst), SHF_SC_R_SHF, shift_amt);
 	}
-	wrp_immed(nfp_prog, reg_both(dst + 1), 0);
+	wrp_zext(nfp_prog, meta, dst);

 	return 0;
 }
@@ -2414,7 +2429,7 @@ static int ashr_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
 	umin = meta->umin_src;
 	umax = meta->umax_src;
 	if (umin == umax)
-		return __ashr_imm(nfp_prog, dst, umin);
+		return __ashr_imm(nfp_prog, meta, dst, umin);

 	src = insn->src_reg * 2;

 	/* NOTE: the first insn will set both indirect shift amount (source A)
@@ -2423,7 +2438,7 @@ static int ashr_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
 	emit_alu(nfp_prog, reg_none(), reg_a(src), ALU_OP_OR, reg_b(dst));
 	emit_shf_indir(nfp_prog, reg_both(dst), reg_none(), SHF_OP_ASHR,
 		       reg_b(dst), SHF_SC_R_SHF);
-	wrp_immed(nfp_prog, reg_both(dst + 1), 0);
+	wrp_zext(nfp_prog, meta, dst);

 	return 0;
 }
@@ -2433,15 +2448,17 @@ static int ashr_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
 	const struct bpf_insn *insn = &meta->insn;
 	u8 dst = insn->dst_reg * 2;

-	return __ashr_imm(nfp_prog, dst, insn->imm);
+	return __ashr_imm(nfp_prog, meta, dst, insn->imm);
 }

-static int __shr_imm(struct nfp_prog *nfp_prog, u8 dst, u8 shift_amt)
+static int
+__shr_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta, u8 dst,
+	  u8 shift_amt)
 {
 	if (shift_amt)
 		emit_shf(nfp_prog, reg_both(dst), reg_none(), SHF_OP_NONE,
 			 reg_b(dst), SHF_SC_R_SHF, shift_amt);
-	wrp_immed(nfp_prog, reg_both(dst + 1), 0);
+	wrp_zext(nfp_prog, meta, dst);

 	return 0;
 }
@@ -2450,7 +2467,7 @@ static int shr_imm(struct nfp_prog *nfp_prog, struct
nfp_insn_meta *meta)
 	const struct bpf_insn *insn = &meta->insn;
 	u8 dst = insn->dst_reg * 2;

-	return __shr_imm(nfp_prog, dst, insn->imm);
+	return __shr_imm(nfp_prog, meta, dst, insn->imm);
 }

 static int shr_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
@@ -2463,22 +2480,24 @@ static int shr_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
 	umin = meta->umin_src;
 	umax = meta->umax_src;
 	if (umin == umax)
-		return __shr_imm(nfp_prog, dst, umin);
+		return __shr_imm(nfp_prog, meta, dst, umin);

 	src = insn->src_reg * 2;
 	emit_alu(nfp_prog, reg_none(), reg_a(src), ALU_OP_OR, reg_imm(0));
 	emit_shf_indir(nfp_prog, reg_both(dst), reg_none(), SHF_OP_NONE,
 		       reg_b(dst), SHF_SC_R_SHF);
-	wrp_immed(nfp_prog, reg_both(dst + 1), 0);
+	wrp_zext(nfp_prog, meta, dst);

 	return 0;
 }

-static int __shl_imm(struct nfp_prog *nfp_prog, u8 dst, u8 shift_amt)
+static int
+__shl_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta, u8 dst,
+	  u8 shift_amt)
 {
 	if (shift_amt)
 		emit_shf(nfp_prog, reg_both(dst), reg_none(), SHF_OP_NONE,
 			 reg_b(dst), SHF_SC_L_SHF, shift_amt);
-	wrp_immed(nfp_prog, reg_both(dst + 1), 0);
+	wrp_zext(nfp_prog, meta, dst);

 	return 0;
 }
@@ -2487,7 +2506,7 @@ static int shl_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
 	const struct bpf_insn *insn = &meta->insn;
 	u8 dst = insn->dst_reg * 2;

-	return __shl_imm(nfp_prog, dst, insn->imm);
+	return __shl_imm(nfp_prog, meta, dst, insn->imm);
 }

 static int shl_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
@@ -2500,11 +2519,11 @@ static int shl_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
 	umin = meta->umin_src;
 	umax = meta->umax_src;
 	if (umin == umax)
-		return __shl_imm(nfp_prog, dst, umin);
+		return __shl_imm(nfp_prog, meta, dst, umin);

 	src = insn->src_reg * 2;
 	shl_reg64_lt32_low(nfp_prog, dst, src);
-	wrp_immed(nfp_prog, reg_both(dst + 1), 0);
+	wrp_zext(nfp_prog, meta, dst);

 	return 0;
 }
@@ -2566,34 +2585,34 @@ static int imm_ld8(struct nfp_prog *nfp_prog, struct
nfp_insn_meta *meta)
 static int data_ld1(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
 {
-	return construct_data_ld(nfp_prog, meta->insn.imm, 1);
+	return construct_data_ld(nfp_prog, meta, meta->insn.imm, 1);
 }

 static int data_ld2(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
 {
-	return construct_data_ld(nfp_prog, meta->insn.imm, 2);
+	return construct_data_ld(nfp_prog, meta, meta->insn.imm, 2);
 }

 static int data_ld4(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
 {
-	return construct_data_ld(nfp_prog, meta->insn.imm, 4);
+	return construct_data_ld(nfp_prog, meta, meta->insn.imm, 4);
 }

 static int data_ind_ld1(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
 {
-	return construct_data_ind_ld(nfp_prog, meta->insn.imm,
+	return construct_data_ind_ld(nfp_prog, meta, meta->insn.imm,
 				     meta->insn.src_reg * 2, 1);
 }

 static int data_ind_ld2(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
 {
-	return construct_data_ind_ld(nfp_prog, meta->insn.imm,
+	return construct_data_ind_ld(nfp_prog, meta, meta->insn.imm,
 				     meta->insn.src_reg * 2, 2);
 }

 static int data_ind_ld4(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
 {
-	return construct_data_ind_ld(nfp_prog, meta->insn.imm,
+	return construct_data_ind_ld(nfp_prog, meta, meta->insn.imm,
 				     meta->insn.src_reg * 2, 4);
 }

@@ -2671,7 +2690,7 @@ mem_ldx_data(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta,
 	tmp_reg = re_load_imm_any(nfp_prog, meta->insn.off, imm_b(nfp_prog));

-	return data_ld_host_order_addr32(nfp_prog, meta->insn.src_reg * 2,
+	return data_ld_host_order_addr32(nfp_prog, meta, meta->insn.src_reg * 2,
 					 tmp_reg, meta->insn.dst_reg * 2, size);
 }

@@ -2683,7 +2702,7 @@ mem_ldx_emem(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta,
 	tmp_reg = re_load_imm_any(nfp_prog, meta->insn.off, imm_b(nfp_prog));

-	return data_ld_host_order_addr40(nfp_prog, meta->insn.src_reg * 2,
+	return data_ld_host_order_addr40(nfp_prog, meta, meta->insn.src_reg * 2,
 					 tmp_reg, meta->insn.dst_reg * 2,
					 size);
 }

@@ -2744,7 +2763,7 @@ mem_ldx_data_from_pktcache_unaligned(struct nfp_prog *nfp_prog,
 	wrp_reg_subpart(nfp_prog, dst_lo, src_lo, len_lo, off);

 	if (!len_mid) {
-		wrp_immed(nfp_prog, dst_hi, 0);
+		wrp_zext(nfp_prog, meta, dst_gpr);
 		return 0;
 	}

@@ -2752,7 +2771,7 @@ mem_ldx_data_from_pktcache_unaligned(struct nfp_prog *nfp_prog,
 	if (size <= REG_WIDTH) {
 		wrp_reg_or_subpart(nfp_prog, dst_lo, src_mid, len_mid, len_lo);
-		wrp_immed(nfp_prog, dst_hi, 0);
+		wrp_zext(nfp_prog, meta, dst_gpr);
 	} else {
 		swreg src_hi = reg_xfer(idx + 2);

@@ -2783,10 +2802,10 @@ mem_ldx_data_from_pktcache_aligned(struct nfp_prog *nfp_prog,
 	if (size < REG_WIDTH) {
 		wrp_reg_subpart(nfp_prog, dst_lo, src_lo, size, 0);
-		wrp_immed(nfp_prog, dst_hi, 0);
+		wrp_zext(nfp_prog, meta, dst_gpr);
 	} else if (size == REG_WIDTH) {
 		wrp_mov(nfp_prog, dst_lo, src_lo);
-		wrp_immed(nfp_prog, dst_hi, 0);
+		wrp_zext(nfp_prog, meta, dst_gpr);
 	} else {
 		swreg src_hi = reg_xfer(idx + 1);

diff --git a/drivers/net/ethernet/netronome/nfp/bpf/main.h b/drivers/net/ethernet/netronome/nfp/bpf/main.h
index e54d1ac..57d6ff5 100644
--- a/drivers/net/ethernet/netronome/nfp/bpf/main.h
+++ b/drivers/net/ethernet/netronome/nfp/bpf/main.h
@@ -238,6 +238,8 @@ struct nfp_bpf_reg_state {
 #define FLAG_INSN_SKIP_PREC_DEPENDENT		BIT(4)
 /* Instruction is optimized by the verifier */
 #define FLAG_INSN_SKIP_VERIFIER_OPT		BIT(5)
+/* Instruction needs to zero extend to high 32-bit */
+#define FLAG_INSN_DO_ZEXT			BIT(6)

 #define FLAG_INSN_SKIP_MASK	(FLAG_INSN_SKIP_NOOP | \
				 FLAG_INSN_SKIP_PREC_DEPENDENT | \
diff --git a/drivers/net/ethernet/netronome/nfp/bpf/verifier.c b/drivers/net/ethernet/netronome/nfp/bpf/verifier.c
index 36f56eb..e92ee51 100644
--- a/drivers/net/ethernet/netronome/nfp/bpf/verifier.c
+++ b/drivers/net/ethernet/netronome/nfp/bpf/verifier.c
@@ -744,6 +744,17 @@ static unsigned int nfp_bpf_get_stack_usage(struct nfp_prog *nfp_prog)
 	goto continue_subprog;
 }

+static void nfp_bpf_insn_flag_zext(struct nfp_prog *nfp_prog,
+				   struct
bpf_insn_aux_data *aux)
+{
+	struct nfp_insn_meta *meta;
+
+	list_for_each_entry(meta, &nfp_prog->insns, l) {
+		if (aux[meta->n].zext_dst)
+			meta->flags |= FLAG_INSN_DO_ZEXT;
+	}
+}
+
 int nfp_bpf_finalize(struct bpf_verifier_env *env)
 {
 	struct bpf_subprog_info *info;
@@ -784,6 +795,7 @@ int nfp_bpf_finalize(struct bpf_verifier_env *env)
 		return -EOPNOTSUPP;
 	}

+	nfp_bpf_insn_flag_zext(nfp_prog, env->insn_aux_data);
 	return 0;
 }