From: Luke Nelson
To: bpf@vger.kernel.org
Cc: Luke Nelson, Xi Wang, Catalin Marinas, Will Deacon, Daniel Borkmann,
    Alexei Starovoitov, Zi Shen Lim, Martin KaFai Lau, Song Liu,
    Yonghong Song, Andrii Nakryiko, John Fastabend, KP Singh,
    Mark Rutland, Alexios Zavras, Thomas Gleixner, Christoffer Dall,
    Marc Zyngier, linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org, netdev@vger.kernel.org,
    clang-built-linux@googlegroups.com
Subject: [PATCH bpf-next v2 1/3] arm64: insn: Fix two bugs in encoding 32-bit logical immediates
Date: Fri, 8 May 2020 11:15:44 -0700
Message-Id: <20200508181547.24783-2-luke.r.nels@gmail.com>
In-Reply-To: <20200508181547.24783-1-luke.r.nels@gmail.com>

This patch fixes two issues present in the current function for encoding
arm64 logical immediates when using the 32-bit variants of instructions.

First, the code does not correctly reject an all-ones 32-bit immediate,
and returns an undefined instruction encoding.

Second, the code incorrectly rejects some 32-bit immediates that are
actually encodable as logical immediates. The root cause is that the code
uses a default mask of 64-bit all-ones, even for 32-bit immediates. This
causes an issue later on when the default mask is used to fill the top
bits of the immediate with ones, shown here:

	/*
	 * Pattern: 0..01..10..01..1
	 *
	 * Fill the unused top bits with ones, and check if
	 * the result is a valid immediate (all ones with a
	 * contiguous ranges of zeroes).
	 */
	imm |= ~mask;
	if (!range_of_ones(~imm))
		return AARCH64_BREAK_FAULT;

To see the problem, consider an immediate of the form 0..01..10..01..1,
where the upper 32 bits are zero, such as 0x80000001. The code checks
whether ~(imm | ~mask) contains a range of ones: the incorrect mask yields
1..10..01..10..0, which fails the check; the correct mask yields
0..01..10..0, which succeeds.

The fix for both issues is to generate a correct mask based on the
instruction immediate size, and use the mask to check for all-ones,
all-zeroes, and values wider than the mask.

Currently, arch/arm64/kvm/va_layout.c is the only user of this function,
which uses 64-bit immediates and therefore won't trigger these bugs.

We tested the new code against llvm-mc with all 1,302 encodable 32-bit
logical immediates and all 5,334 encodable 64-bit logical immediates.
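
For illustration (not part of the patch), the effect of the mask on this
check can be reproduced in user space. This is a minimal sketch:
range_of_ones() is re-implemented here with a compiler builtin and, like
the kernel helper, is undefined for a zero argument, which the demo avoids:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* User-space re-implementation of insn.c's range_of_ones() helper. */
    static bool range_of_ones(uint64_t val)
    {
        uint64_t sval = val >> __builtin_ctzll(val);

        /* After shifting out trailing zeroes, a contiguous run of ones
         * becomes 0..01..1; adding 1 and ANDing clears all bits iff the
         * run really was contiguous. */
        return ((sval + 1) & sval) == 0;
    }

    int main(void)
    {
        uint64_t imm = 0x80000001ULL;      /* pattern 0..01..10..01..1 */
        uint64_t old_mask = ~0ULL;         /* buggy: 64-bit all-ones */
        uint64_t new_mask = 0xFFFFFFFFULL; /* fixed: GENMASK(31, 0) */

        /* Mirrors: imm |= ~mask; if (!range_of_ones(~imm)) reject */
        printf("old mask: %s\n",
               range_of_ones(~(imm | ~old_mask)) ? "accepted" : "rejected");
        printf("new mask: %s\n",
               range_of_ones(~(imm | ~new_mask)) ? "accepted" : "rejected");
        return 0;
    }

This prints "old mask: rejected" and "new mask: accepted", matching the
analysis above.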
Fixes: ef3935eeebff ("arm64: insn: Add encoder for bitwise operations using literals")
Co-developed-by: Xi Wang
Signed-off-by: Xi Wang
Signed-off-by: Luke Nelson
Reviewed-by: Marc Zyngier
Suggested-by: Will Deacon
---
 arch/arm64/kernel/insn.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
index 4a9e773a177f..cc2f3d901c91 100644
--- a/arch/arm64/kernel/insn.c
+++ b/arch/arm64/kernel/insn.c
@@ -1535,16 +1535,10 @@ static u32 aarch64_encode_immediate(u64 imm,
 				    u32 insn)
 {
 	unsigned int immr, imms, n, ones, ror, esz, tmp;
-	u64 mask = ~0UL;
-
-	/* Can't encode full zeroes or full ones */
-	if (!imm || !~imm)
-		return AARCH64_BREAK_FAULT;
+	u64 mask;
 
 	switch (variant) {
 	case AARCH64_INSN_VARIANT_32BIT:
-		if (upper_32_bits(imm))
-			return AARCH64_BREAK_FAULT;
 		esz = 32;
 		break;
 	case AARCH64_INSN_VARIANT_64BIT:
@@ -1556,6 +1550,12 @@ static u32 aarch64_encode_immediate(u64 imm,
 		return AARCH64_BREAK_FAULT;
 	}
 
+	mask = GENMASK(esz - 1, 0);
+
+	/* Can't encode full zeroes, full ones, or value wider than the mask */
+	if (!imm || imm == mask || imm & ~mask)
+		return AARCH64_BREAK_FAULT;
+
 	/*
 	 * Inverse of Replicate(). Try to spot a repeating pattern
 	 * with a pow2 stride.
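
For reference (not part of the patch), the exported wrapper around this
encoder is aarch64_insn_gen_logical_immediate(). With the fix, a 32-bit
AND with 0x80000001 now encodes, while an all-ones 32-bit immediate is
still rejected. A sketch of the expected behavior (register choices here
are arbitrary):

    u32 ok = aarch64_insn_gen_logical_immediate(AARCH64_INSN_LOGIC_AND,
                                                AARCH64_INSN_VARIANT_32BIT,
                                                AARCH64_INSN_REG_1,
                                                AARCH64_INSN_REG_0,
                                                0x80000001);
    /* ok encodes "and w0, w1, #0x80000001" */

    u32 bad = aarch64_insn_gen_logical_immediate(AARCH64_INSN_LOGIC_AND,
                                                 AARCH64_INSN_VARIANT_32BIT,
                                                 AARCH64_INSN_REG_1,
                                                 AARCH64_INSN_REG_0,
                                                 0xFFFFFFFF);
    /* bad == AARCH64_BREAK_FAULT: all-ones is now rejected, rather than
     * silently producing an undefined encoding */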

From: Luke Nelson
To: bpf@vger.kernel.org
Cc: Luke Nelson, Xi Wang, Catalin Marinas, Will Deacon, Daniel Borkmann,
    Alexei Starovoitov, Zi Shen Lim, Martin KaFai Lau, Song Liu,
    Yonghong Song, Andrii Nakryiko, John Fastabend, KP Singh,
    Mark Rutland, Ard Biesheuvel, Torsten Duwe, Greg Kroah-Hartman,
    Enrico Weigelt, Thomas Gleixner, Christoffer Dall, Marc Zyngier,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    netdev@vger.kernel.org, clang-built-linux@googlegroups.com
Subject: [PATCH bpf-next v2 2/3] bpf, arm64: Optimize AND, OR, XOR, JSET BPF_K using arm64 logical immediates
Date: Fri, 8 May 2020 11:15:45 -0700
Message-Id: <20200508181547.24783-3-luke.r.nels@gmail.com>
In-Reply-To: <20200508181547.24783-1-luke.r.nels@gmail.com>

The current code for BPF_{AND,OR,XOR,JSET} BPF_K loads the immediate to
a temporary register before use. This patch changes the code to avoid
the temporary register when the BPF immediate is encodable as an arm64
logical immediate instruction. If the encoding fails (because the
immediate is not encodable), it falls back to using a temporary register.

Example of generated code for BPF_ALU32_IMM(BPF_AND, R0, 0x80000001):

without optimization:

  24: mov  w10, #0x8000ffff
  28: movk w10, #0x1
  2c: and  w7, w7, w10

with optimization:

  24: and  w7, w7, #0x80000001

Since the encoding process is quite complex, the JIT reuses existing
functionality in arch/arm64/kernel/insn.c for encoding logical immediates
rather than duplicating it.
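
For illustration (not part of the patch), which 32-bit immediates take the
single-instruction path can be checked with a brute-force user-space sketch
of the arm64 bitmask-immediate rule (an element of 2/4/8/16/32 bits, itself
a rotated contiguous run of ones, replicated across the register). This is
a demonstration under those assumptions, not the kernel's encoder:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Brute force: is imm a valid arm64 32-bit logical immediate? */
    static bool encodable_logical_imm32(uint32_t imm)
    {
        for (unsigned esz = 2; esz <= 32; esz *= 2) {
            uint64_t emask = (esz == 32) ? 0xFFFFFFFFULL
                                         : ((1ULL << esz) - 1);
            uint64_t elem = imm & emask;
            uint64_t rep = elem;

            /* The element must replicate across the full 32 bits. */
            for (unsigned i = esz; i < 32; i += esz)
                rep |= elem << i;
            if (rep != imm)
                continue;

            /* The element must be a rotation of 1..esz-1 ones. */
            for (unsigned ones = 1; ones < esz; ones++) {
                uint64_t run = (1ULL << ones) - 1;

                for (unsigned r = 0; r < esz; r++) {
                    uint64_t rot = ((run >> r) | (run << (esz - r))) & emask;

                    if (rot == elem)
                        return true;
                }
            }
        }
        return false;
    }

    int main(void)
    {
        printf("%d %d %d\n",
               encodable_logical_imm32(0x80000001),  /* 1: single and */
               encodable_logical_imm32(0x55555555),  /* 1: 2-bit pattern */
               encodable_logical_imm32(0x12345678)); /* 0: mov+and path */
        return 0;
    }

Here 0x80000001 and 0x55555555 report encodable, while 0x12345678 reports
not encodable and would fall back to the mov/movk sequence shown above.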
Co-developed-by: Xi Wang
Signed-off-by: Xi Wang
Signed-off-by: Luke Nelson
Acked-by: Daniel Borkmann
---
 arch/arm64/net/bpf_jit.h      | 14 +++++++++++++
 arch/arm64/net/bpf_jit_comp.c | 37 +++++++++++++++++++++++++++--------
 2 files changed, 43 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/net/bpf_jit.h b/arch/arm64/net/bpf_jit.h
index eb73f9f72c46..f36a779949e6 100644
--- a/arch/arm64/net/bpf_jit.h
+++ b/arch/arm64/net/bpf_jit.h
@@ -189,4 +189,18 @@
 /* Rn & Rm; set condition flags */
 #define A64_TST(sf, Rn, Rm) A64_ANDS(sf, A64_ZR, Rn, Rm)
 
+/* Logical (immediate) */
+#define A64_LOGIC_IMM(sf, Rd, Rn, imm, type) ({ \
+	u64 imm64 = (sf) ? (u64)imm : (u64)(u32)imm; \
+	aarch64_insn_gen_logical_immediate(AARCH64_INSN_LOGIC_##type, \
+		A64_VARIANT(sf), Rn, Rd, imm64); \
+})
+/* Rd = Rn OP imm */
+#define A64_AND_I(sf, Rd, Rn, imm) A64_LOGIC_IMM(sf, Rd, Rn, imm, AND)
+#define A64_ORR_I(sf, Rd, Rn, imm) A64_LOGIC_IMM(sf, Rd, Rn, imm, ORR)
+#define A64_EOR_I(sf, Rd, Rn, imm) A64_LOGIC_IMM(sf, Rd, Rn, imm, EOR)
+#define A64_ANDS_I(sf, Rd, Rn, imm) A64_LOGIC_IMM(sf, Rd, Rn, imm, AND_SETFLAGS)
+/* Rn & imm; set condition flags */
+#define A64_TST_I(sf, Rn, imm) A64_ANDS_I(sf, A64_ZR, Rn, imm)
+
 #endif /* _BPF_JIT_H */
diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
index cdc79de0c794..083e5d8a5e2c 100644
--- a/arch/arm64/net/bpf_jit_comp.c
+++ b/arch/arm64/net/bpf_jit_comp.c
@@ -356,6 +356,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 	const bool isdw = BPF_SIZE(code) == BPF_DW;
 	u8 jmp_cond, reg;
 	s32 jmp_offset;
+	u32 a64_insn;
 
 #define check_imm(bits, imm) do {				\
 	if ((((imm) > 0) && ((imm) >> (bits))) ||		\
@@ -488,18 +489,33 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 		break;
 	case BPF_ALU | BPF_AND | BPF_K:
 	case BPF_ALU64 | BPF_AND | BPF_K:
-		emit_a64_mov_i(is64, tmp, imm, ctx);
-		emit(A64_AND(is64, dst, dst, tmp), ctx);
+		a64_insn = A64_AND_I(is64, dst, dst, imm);
+		if (a64_insn != AARCH64_BREAK_FAULT) {
+			emit(a64_insn, ctx);
+		} else {
+			emit_a64_mov_i(is64, tmp, imm, ctx);
+			emit(A64_AND(is64, dst, dst, tmp), ctx);
+		}
 		break;
 	case BPF_ALU | BPF_OR | BPF_K:
 	case BPF_ALU64 | BPF_OR | BPF_K:
-		emit_a64_mov_i(is64, tmp, imm, ctx);
-		emit(A64_ORR(is64, dst, dst, tmp), ctx);
+		a64_insn = A64_ORR_I(is64, dst, dst, imm);
+		if (a64_insn != AARCH64_BREAK_FAULT) {
+			emit(a64_insn, ctx);
+		} else {
+			emit_a64_mov_i(is64, tmp, imm, ctx);
+			emit(A64_ORR(is64, dst, dst, tmp), ctx);
+		}
 		break;
 	case BPF_ALU | BPF_XOR | BPF_K:
 	case BPF_ALU64 | BPF_XOR | BPF_K:
-		emit_a64_mov_i(is64, tmp, imm, ctx);
-		emit(A64_EOR(is64, dst, dst, tmp), ctx);
+		a64_insn = A64_EOR_I(is64, dst, dst, imm);
+		if (a64_insn != AARCH64_BREAK_FAULT) {
+			emit(a64_insn, ctx);
+		} else {
+			emit_a64_mov_i(is64, tmp, imm, ctx);
+			emit(A64_EOR(is64, dst, dst, tmp), ctx);
+		}
 		break;
 	case BPF_ALU | BPF_MUL | BPF_K:
 	case BPF_ALU64 | BPF_MUL | BPF_K:
@@ -628,8 +644,13 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 		goto emit_cond_jmp;
 	case BPF_JMP | BPF_JSET | BPF_K:
 	case BPF_JMP32 | BPF_JSET | BPF_K:
-		emit_a64_mov_i(is64, tmp, imm, ctx);
-		emit(A64_TST(is64, dst, tmp), ctx);
+		a64_insn = A64_TST_I(is64, dst, imm);
+		if (a64_insn != AARCH64_BREAK_FAULT) {
+			emit(a64_insn, ctx);
+		} else {
+			emit_a64_mov_i(is64, tmp, imm, ctx);
+			emit(A64_TST(is64, dst, tmp), ctx);
+		}
 		goto emit_cond_jmp;
 	/* function call */
 	case BPF_JMP | BPF_CALL:
From: Luke Nelson
To: bpf@vger.kernel.org
Cc: Luke Nelson, Xi Wang, Catalin Marinas, Will Deacon, Daniel Borkmann,
    Alexei Starovoitov, Zi Shen Lim, Martin KaFai Lau, Song Liu,
    Yonghong Song, Andrii Nakryiko, John Fastabend, KP Singh,
    Mark Rutland, Alexios Zavras, Greg Kroah-Hartman, Allison Randal,
    Thomas Gleixner, Christoffer Dall, Marc Zyngier,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    netdev@vger.kernel.org, clang-built-linux@googlegroups.com
Subject: [PATCH bpf-next v2 3/3] bpf, arm64: Optimize ADD,SUB,JMP BPF_K using arm64 add/sub immediates
Date: Fri, 8 May 2020 11:15:46 -0700
Message-Id: <20200508181547.24783-4-luke.r.nels@gmail.com>
In-Reply-To: <20200508181547.24783-1-luke.r.nels@gmail.com>

The current code for BPF_{ADD,SUB} BPF_K loads the BPF immediate to a
temporary register before performing the addition/subtraction. Similarly,
the BPF_JMP BPF_K cases load the immediate to a temporary register before
comparison.

This patch introduces optimizations that use arm64 immediate add, sub,
cmn, or cmp instructions when the BPF immediate fits. If the immediate
does not fit, it falls back to using a temporary register.

Example of generated code for BPF_ALU64_IMM(BPF_ADD, R0, 2):

without optimization:

  24: mov x10, #0x2
  28: add x7, x7, x10

with optimization:

  24: add x7, x7, #0x2

The code could use A64_{ADD,SUB}_I directly and check if the result is
AARCH64_BREAK_FAULT, similar to how logical immediates are handled.
However, aarch64_insn_gen_add_sub_imm from insn.c prints error messages
when the immediate does not fit, so it is simpler to check whether the
immediate fits ahead of time.
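
For illustration (not part of the patch), is_addsub_imm() accepts exactly
the immediates an arm64 add/sub instruction can carry: a 12-bit value,
optionally shifted left by 12. A user-space sketch that mirrors the helper
added in the diff below (negative BPF immediates are handled there by
testing -imm and flipping add/sub, or cmp/cmn):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Mirrors is_addsub_imm() from the patch:
     * either imm12 or imm12 shifted left by 12. */
    static bool is_addsub_imm(uint32_t imm)
    {
        return !(imm & ~0xfffU) || !(imm & ~0xfff000U);
    }

    int main(void)
    {
        printf("%d\n", is_addsub_imm(0x2));      /* 1: add x7, x7, #0x2 */
        printf("%d\n", is_addsub_imm(0xabc000)); /* 1: shifted imm12 */
        printf("%d\n", is_addsub_imm(0x1001));   /* 0: mov+add fallback */
        return 0;
    }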
Co-developed-by: Xi Wang
Signed-off-by: Xi Wang
Signed-off-by: Luke Nelson
Acked-by: Daniel Borkmann
---
 arch/arm64/net/bpf_jit.h      |  8 ++++++++
 arch/arm64/net/bpf_jit_comp.c | 36 +++++++++++++++++++++++++++++------
 2 files changed, 38 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/net/bpf_jit.h b/arch/arm64/net/bpf_jit.h
index f36a779949e6..923ae7ff68c8 100644
--- a/arch/arm64/net/bpf_jit.h
+++ b/arch/arm64/net/bpf_jit.h
@@ -100,6 +100,14 @@
 /* Rd = Rn OP imm12 */
 #define A64_ADD_I(sf, Rd, Rn, imm12) A64_ADDSUB_IMM(sf, Rd, Rn, imm12, ADD)
 #define A64_SUB_I(sf, Rd, Rn, imm12) A64_ADDSUB_IMM(sf, Rd, Rn, imm12, SUB)
+#define A64_ADDS_I(sf, Rd, Rn, imm12) \
+	A64_ADDSUB_IMM(sf, Rd, Rn, imm12, ADD_SETFLAGS)
+#define A64_SUBS_I(sf, Rd, Rn, imm12) \
+	A64_ADDSUB_IMM(sf, Rd, Rn, imm12, SUB_SETFLAGS)
+/* Rn + imm12; set condition flags */
+#define A64_CMN_I(sf, Rn, imm12) A64_ADDS_I(sf, A64_ZR, Rn, imm12)
+/* Rn - imm12; set condition flags */
+#define A64_CMP_I(sf, Rn, imm12) A64_SUBS_I(sf, A64_ZR, Rn, imm12)
 
 /* Rd = Rn */
 #define A64_MOV(sf, Rd, Rn) A64_ADD_I(sf, Rd, Rn, 0)
diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
index 083e5d8a5e2c..561a2fea9cdd 100644
--- a/arch/arm64/net/bpf_jit_comp.c
+++ b/arch/arm64/net/bpf_jit_comp.c
@@ -167,6 +167,12 @@ static inline int epilogue_offset(const struct jit_ctx *ctx)
 	return to - from;
 }
 
+static bool is_addsub_imm(u32 imm)
+{
+	/* Either imm12 or shifted imm12. */
+	return !(imm & ~0xfff) || !(imm & ~0xfff000);
+}
+
 /* Stack must be multiples of 16B */
 #define STACK_ALIGN(sz) (((sz) + 15) & ~15)
 
@@ -479,13 +485,25 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 	/* dst = dst OP imm */
 	case BPF_ALU | BPF_ADD | BPF_K:
 	case BPF_ALU64 | BPF_ADD | BPF_K:
-		emit_a64_mov_i(is64, tmp, imm, ctx);
-		emit(A64_ADD(is64, dst, dst, tmp), ctx);
+		if (is_addsub_imm(imm)) {
+			emit(A64_ADD_I(is64, dst, dst, imm), ctx);
+		} else if (is_addsub_imm(-imm)) {
+			emit(A64_SUB_I(is64, dst, dst, -imm), ctx);
+		} else {
+			emit_a64_mov_i(is64, tmp, imm, ctx);
+			emit(A64_ADD(is64, dst, dst, tmp), ctx);
+		}
 		break;
 	case BPF_ALU | BPF_SUB | BPF_K:
 	case BPF_ALU64 | BPF_SUB | BPF_K:
-		emit_a64_mov_i(is64, tmp, imm, ctx);
-		emit(A64_SUB(is64, dst, dst, tmp), ctx);
+		if (is_addsub_imm(imm)) {
+			emit(A64_SUB_I(is64, dst, dst, imm), ctx);
+		} else if (is_addsub_imm(-imm)) {
+			emit(A64_ADD_I(is64, dst, dst, -imm), ctx);
+		} else {
+			emit_a64_mov_i(is64, tmp, imm, ctx);
+			emit(A64_SUB(is64, dst, dst, tmp), ctx);
+		}
 		break;
 	case BPF_ALU | BPF_AND | BPF_K:
 	case BPF_ALU64 | BPF_AND | BPF_K:
@@ -639,8 +657,14 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 	case BPF_JMP32 | BPF_JSLT | BPF_K:
 	case BPF_JMP32 | BPF_JSGE | BPF_K:
 	case BPF_JMP32 | BPF_JSLE | BPF_K:
-		emit_a64_mov_i(is64, tmp, imm, ctx);
-		emit(A64_CMP(is64, dst, tmp), ctx);
+		if (is_addsub_imm(imm)) {
+			emit(A64_CMP_I(is64, dst, imm), ctx);
+		} else if (is_addsub_imm(-imm)) {
+			emit(A64_CMN_I(is64, dst, -imm), ctx);
+		} else {
+			emit_a64_mov_i(is64, tmp, imm, ctx);
+			emit(A64_CMP(is64, dst, tmp), ctx);
+		}
 		goto emit_cond_jmp;
 	case BPF_JMP | BPF_JSET | BPF_K:
 	case BPF_JMP32 | BPF_JSET | BPF_K: