From patchwork Tue Apr 30 15:09:31 2024
X-Patchwork-Submitter: Shahab Vahedi
X-Patchwork-Id: 1929669
From: Shahab Vahedi
To: Björn Töpel
Cc: Shahab Vahedi, Vineet Gupta, linux-snps-arc@lists.infradead.org
Subject: [PATCH 1/7] v2: Turn "emit" from global into a context var
Date: Tue, 30 Apr 2024 17:09:31 +0200
Message-Id: <20240430150937.39793-2-list+bpf@vahedi.org>
In-Reply-To: <20240430150937.39793-1-list+bpf@vahedi.org>
References: <20240430150937.39793-1-list+bpf@vahedi.org>
List-Id: Linux on Synopsys ARC Processors <linux-snps-arc@lists.infradead.org>

From: Shahab Vahedi

Plus an easter egg: add "static" to the do_{normal,extra}_pass()
prototypes, so GCC won't complain about a missing prototype before
the point of invocation.
---
 arch/arc/net/bpf_jit.h       |  14 +-
 arch/arc/net/bpf_jit_arcv2.c | 409 ++++++++++++++++++-----------------
 arch/arc/net/bpf_jit_core.c  |  78 +++----
 3 files changed, 256 insertions(+), 245 deletions(-)

diff --git a/arch/arc/net/bpf_jit.h b/arch/arc/net/bpf_jit.h
index 5c8b9eb0ac81..ecad47b8b796 100644
--- a/arch/arc/net/bpf_jit.h
+++ b/arch/arc/net/bpf_jit.h
@@ -26,14 +26,18 @@
  */
 #define JIT_REG_TMP	MAX_BPF_JIT_REG
 
-/************* Globals that have effects on code generation ***********/
 /*
- * If "emit" is true, the instructions are actually generated. Else, the
- * generation part will be skipped and only the length of instruction is
- * returned by the responsible functions.
+ * Buffer access: If buffer "b" is not NULL, advance by "n" bytes.
+ *
+ * This macro must be used in any place that potentially requires a
+ * "buf + len". This way, we make sure that the "buf" argument for
+ * the underlying "arc_*(buf, ...)" ends up as NULL instead of something
+ * like "0+4" or "0+8", etc. Those "arc_*()" functions check their "buf"
+ * value to decide if instructions should be emitted or not.
  */
-extern bool emit;
+#define BUF(b, n)	(((b) != NULL) ? ((b) + (n)) : (b))
 
+/************* Globals that have effects on code generation ***********/
 /* An indicator if zero-extend must be done for the 32-bit operations.
*/ extern bool zext_thyself; diff --git a/arch/arc/net/bpf_jit_arcv2.c b/arch/arc/net/bpf_jit_arcv2.c index 8de8fb19a8d0..b9e803f04a36 100644 --- a/arch/arc/net/bpf_jit_arcv2.c +++ b/arch/arc/net/bpf_jit_arcv2.c @@ -661,7 +661,7 @@ static u8 arc_movi_r(u8 *buf, u8 reg, s16 imm) { const u32 insn = OPC_MOVI | OP_B(reg) | MOVI_S12(imm); - if (emit) + if (buf) emit_4_bytes(buf, insn); return INSN_len_normal; } @@ -671,7 +671,7 @@ static u8 arc_mov_r(u8 *buf, u8 rd, u8 rs) { const u32 insn = OPC_MOV | OP_B(rd) | OP_C(rs); - if (emit) + if (buf) emit_4_bytes(buf, insn); return INSN_len_normal; } @@ -684,7 +684,7 @@ static u8 arc_mov_i(u8 *buf, u8 rd, s32 imm) if (IN_S12_RANGE(imm)) return arc_movi_r(buf, rd, imm); - if (emit) { + if (buf) { emit_4_bytes(buf, insn); emit_4_bytes(buf+INSN_len_normal, imm); } @@ -696,7 +696,7 @@ static u8 arc_mov_i_fixed(u8 *buf, u8 rd, s32 imm) { const u32 insn = OPC_MOV | OP_B(rd) | OP_IMM; - if (emit) { + if (buf) { emit_4_bytes(buf, insn); emit_4_bytes(buf+INSN_len_normal, imm); } @@ -708,7 +708,7 @@ static u8 arc_mov_cc_r(u8 *buf, u8 cc, u8 rd, u8 rs) { const u32 insn = OPC_MOV_CC | OP_B(rd) | OP_C(rs) | COND(cc); - if (emit) + if (buf) emit_4_bytes(buf, insn); return INSN_len_normal; } @@ -718,7 +718,7 @@ static u8 arc_movu_cc_r(u8 *buf, u8 cc, u8 rd, u8 imm) { const u32 insn = OPC_MOVU_CC | OP_B(rd) | OP_C(imm) | COND(cc); - if (emit) + if (buf) emit_4_bytes(buf, insn); return INSN_len_normal; } @@ -728,7 +728,7 @@ static u8 arc_sexb_r(u8 *buf, u8 rd, u8 rs) { const u32 insn = OPC_SEXB | OP_B(rd) | OP_C(rs); - if (emit) + if (buf) emit_4_bytes(buf, insn); return INSN_len_normal; } @@ -738,7 +738,7 @@ static u8 arc_sexh_r(u8 *buf, u8 rd, u8 rs) { const u32 insn = OPC_SEXH | OP_B(rd) | OP_C(rs); - if (emit) + if (buf) emit_4_bytes(buf, insn); return INSN_len_normal; } @@ -749,7 +749,7 @@ static u8 arc_st_r(u8 *buf, u8 reg, u8 reg_mem, s16 off, u8 zz) const u32 insn = OPC_STORE | STORE_ZZ(zz) | OP_C(reg) | OP_B(reg_mem) | STORE_S9(off); 
- if (emit) + if (buf) emit_4_bytes(buf, insn); return INSN_len_normal; } @@ -759,7 +759,7 @@ static u8 arc_push_r(u8 *buf, u8 reg) { const u32 insn = OPC_PUSH | OP_C(reg); - if (emit) + if (buf) emit_4_bytes(buf, insn); return INSN_len_normal; } @@ -770,7 +770,7 @@ static u8 arc_ld_r(u8 *buf, u8 reg, u8 reg_mem, s16 off, u8 zz) const u32 insn = OPC_LDU | LOAD_ZZ(zz) | LOAD_C(reg) | OP_B(reg_mem) | LOAD_S9(off); - if (emit) + if (buf) emit_4_bytes(buf, insn); return INSN_len_normal; } @@ -781,7 +781,7 @@ static u8 arc_ldx_r(u8 *buf, u8 reg, u8 reg_mem, s16 off, u8 zz) const u32 insn = OPC_LDS | LOAD_ZZ(zz) | LOAD_C(reg) | OP_B(reg_mem) | LOAD_S9(off); - if (emit) + if (buf) emit_4_bytes(buf, insn); return INSN_len_normal; } @@ -791,7 +791,7 @@ static u8 arc_pop_r(u8 *buf, u8 reg) { const u32 insn = OPC_POP | LOAD_C(reg); - if (emit) + if (buf) emit_4_bytes(buf, insn); return INSN_len_normal; } @@ -801,7 +801,7 @@ static u8 arc_add_r(u8 *buf, u8 ra, u8 rc) { const u32 insn = OPC_ADD | OP_A(ra) | OP_B(ra) | OP_C(rc); - if (emit) + if (buf) emit_4_bytes(buf, insn); return INSN_len_normal; } @@ -811,7 +811,7 @@ static u8 arc_addf_r(u8 *buf, u8 ra, u8 rc) { const u32 insn = OPC_ADDF | OP_A(ra) | OP_B(ra) | OP_C(rc); - if (emit) + if (buf) emit_4_bytes(buf, insn); return INSN_len_normal; } @@ -821,7 +821,7 @@ static u8 arc_addif_r(u8 *buf, u8 ra, u8 u6) { const u32 insn = OPC_ADDIF | OP_A(ra) | OP_B(ra) | ADDI_U6(u6); - if (emit) + if (buf) emit_4_bytes(buf, insn); return INSN_len_normal; } @@ -831,7 +831,7 @@ static u8 arc_addi_r(u8 *buf, u8 ra, u8 u6) { const u32 insn = OPC_ADDI | OP_A(ra) | OP_B(ra) | ADDI_U6(u6); - if (emit) + if (buf) emit_4_bytes(buf, insn); return INSN_len_normal; } @@ -841,7 +841,7 @@ static u8 arc_add_i(u8 *buf, u8 ra, u8 rb, s32 imm) { const u32 insn = OPC_ADD_I | OP_A(ra) | OP_B(rb); - if (emit) { + if (buf) { emit_4_bytes(buf, insn); emit_4_bytes(buf+INSN_len_normal, imm); } @@ -853,7 +853,7 @@ static u8 arc_adc_r(u8 *buf, u8 ra, u8 rc) { 
const u32 insn = OPC_ADC | OP_A(ra) | OP_B(ra) | OP_C(rc); - if (emit) + if (buf) emit_4_bytes(buf, insn); return INSN_len_normal; } @@ -863,7 +863,7 @@ static u8 arc_adci_r(u8 *buf, u8 ra, u8 u6) { const u32 insn = OPC_ADCI | OP_A(ra) | OP_B(ra) | ADCI_U6(u6); - if (emit) + if (buf) emit_4_bytes(buf, insn); return INSN_len_normal; } @@ -873,7 +873,7 @@ static u8 arc_sub_r(u8 *buf, u8 ra, u8 rc) { const u32 insn = OPC_SUB | OP_A(ra) | OP_B(ra) | OP_C(rc); - if (emit) + if (buf) emit_4_bytes(buf, insn); return INSN_len_normal; } @@ -883,7 +883,7 @@ static u8 arc_subf_r(u8 *buf, u8 ra, u8 rc) { const u32 insn = OPC_SUBF | OP_A(ra) | OP_B(ra) | OP_C(rc); - if (emit) + if (buf) emit_4_bytes(buf, insn); return INSN_len_normal; } @@ -893,7 +893,7 @@ static u8 arc_subi_r(u8 *buf, u8 ra, u8 u6) { const u32 insn = OPC_SUBI | OP_A(ra) | OP_B(ra) | SUBI_U6(u6); - if (emit) + if (buf) emit_4_bytes(buf, insn); return INSN_len_normal; } @@ -903,7 +903,7 @@ static u8 arc_sub_i(u8 *buf, u8 ra, s32 imm) { const u32 insn = OPC_SUB_I | OP_A(ra) | OP_B(ra); - if (emit) { + if (buf) { emit_4_bytes(buf, insn); emit_4_bytes(buf+INSN_len_normal, imm); } @@ -915,7 +915,7 @@ static u8 arc_sbc_r(u8 *buf, u8 ra, u8 rc) { const u32 insn = OPC_SBC | OP_A(ra) | OP_B(ra) | OP_C(rc); - if (emit) + if (buf) emit_4_bytes(buf, insn); return INSN_len_normal; } @@ -925,7 +925,7 @@ static u8 arc_cmp_r(u8 *buf, u8 rb, u8 rc) { const u32 insn = OPC_CMP | OP_B(rb) | OP_C(rc); - if (emit) + if (buf) emit_4_bytes(buf, insn); return INSN_len_normal; } @@ -942,7 +942,7 @@ static u8 arc_cmpz_r(u8 *buf, u8 rb, u8 rc) { const u32 insn = OPC_CMP | OP_B(rb) | OP_C(rc) | CC_equal; - if (emit) + if (buf) emit_4_bytes(buf, insn); return INSN_len_normal; } @@ -952,7 +952,7 @@ static u8 arc_neg_r(u8 *buf, u8 ra, u8 rb) { const u32 insn = OPC_NEG | OP_A(ra) | OP_B(rb); - if (emit) + if (buf) emit_4_bytes(buf, insn); return INSN_len_normal; } @@ -962,7 +962,7 @@ static u8 arc_mpy_r(u8 *buf, u8 ra, u8 rb, u8 rc) { const 
u32 insn = OPC_MPY | OP_A(ra) | OP_B(rb) | OP_C(rc); - if (emit) + if (buf) emit_4_bytes(buf, insn); return INSN_len_normal; } @@ -972,7 +972,7 @@ static u8 arc_mpy_i(u8 *buf, u8 ra, u8 rb, s32 imm) { const u32 insn = OPC_MPYI | OP_A(ra) | OP_B(rb); - if (emit) { + if (buf) { emit_4_bytes(buf, insn); emit_4_bytes(buf+INSN_len_normal, imm); } @@ -984,7 +984,7 @@ static u8 arc_mpydu_r(u8 *buf, u8 ra, u8 rc) { const u32 insn = OPC_MPYDU | OP_A(ra) | OP_B(ra) | OP_C(rc); - if (emit) + if (buf) emit_4_bytes(buf, insn); return INSN_len_normal; } @@ -994,7 +994,7 @@ static u8 arc_mpydu_i(u8 *buf, u8 ra, s32 imm) { const u32 insn = OPC_MPYDUI | OP_A(ra) | OP_B(ra); - if (emit) { + if (buf) { emit_4_bytes(buf, insn); emit_4_bytes(buf+INSN_len_normal, imm); } @@ -1006,7 +1006,7 @@ static u8 arc_divu_r(u8 *buf, u8 rd, u8 rs) { const u32 insn = OPC_DIVU | OP_A(rd) | OP_B(rd) | OP_C(rs); - if (emit) + if (buf) emit_4_bytes(buf, insn); return INSN_len_normal; } @@ -1016,7 +1016,7 @@ static u8 arc_divu_i(u8 *buf, u8 rd, s32 imm) { const u32 insn = OPC_DIVUI | OP_A(rd) | OP_B(rd); - if (emit) { + if (buf) { emit_4_bytes(buf, insn); emit_4_bytes(buf+INSN_len_normal, imm); } @@ -1028,7 +1028,7 @@ static u8 arc_divs_r(u8 *buf, u8 rd, u8 rs) { const u32 insn = OPC_DIVS | OP_A(rd) | OP_B(rd) | OP_C(rs); - if (emit) + if (buf) emit_4_bytes(buf, insn); return INSN_len_normal; } @@ -1038,7 +1038,7 @@ static u8 arc_divs_i(u8 *buf, u8 rd, s32 imm) { const u32 insn = OPC_DIVSI | OP_A(rd) | OP_B(rd); - if (emit) { + if (buf) { emit_4_bytes(buf, insn); emit_4_bytes(buf+INSN_len_normal, imm); } @@ -1050,7 +1050,7 @@ static u8 arc_remu_r(u8 *buf, u8 rd, u8 rs) { const u32 insn = OPC_REMU | OP_A(rd) | OP_B(rd) | OP_C(rs); - if (emit) + if (buf) emit_4_bytes(buf, insn); return INSN_len_normal; } @@ -1060,7 +1060,7 @@ static u8 arc_remu_i(u8 *buf, u8 rd, s32 imm) { const u32 insn = OPC_REMUI | OP_A(rd) | OP_B(rd); - if (emit) { + if (buf) { emit_4_bytes(buf, insn); emit_4_bytes(buf+INSN_len_normal, 
imm); } @@ -1072,7 +1072,7 @@ static u8 arc_rems_r(u8 *buf, u8 rd, u8 rs) { const u32 insn = OPC_REMS | OP_A(rd) | OP_B(rd) | OP_C(rs); - if (emit) + if (buf) emit_4_bytes(buf, insn); return INSN_len_normal; } @@ -1082,7 +1082,7 @@ static u8 arc_rems_i(u8 *buf, u8 rd, s32 imm) { const u32 insn = OPC_REMSI | OP_A(rd) | OP_B(rd); - if (emit) { + if (buf) { emit_4_bytes(buf, insn); emit_4_bytes(buf+INSN_len_normal, imm); } @@ -1094,7 +1094,7 @@ static u8 arc_and_r(u8 *buf, u8 rd, u8 rs) { const u32 insn = OPC_AND | OP_A(rd) | OP_B(rd) | OP_C(rs); - if (emit) + if (buf) emit_4_bytes(buf, insn); return INSN_len_normal; } @@ -1104,7 +1104,7 @@ static u8 arc_and_i(u8 *buf, u8 rd, s32 imm) { const u32 insn = OPC_ANDI | OP_A(rd) | OP_B(rd); - if (emit) { + if (buf) { emit_4_bytes(buf, insn); emit_4_bytes(buf+INSN_len_normal, imm); } @@ -1116,7 +1116,7 @@ static u8 arc_tst_r(u8 *buf, u8 rd, u8 rs) { const u32 insn = OPC_TST | OP_B(rd) | OP_C(rs); - if (emit) + if (buf) emit_4_bytes(buf, insn); return INSN_len_normal; } @@ -1131,7 +1131,7 @@ static u8 arc_tstz_r(u8 *buf, u8 rd, u8 rs) { const u32 insn = OPC_TST | OP_B(rd) | OP_C(rs) | CC_equal; - if (emit) + if (buf) emit_4_bytes(buf, insn); return INSN_len_normal; } @@ -1140,7 +1140,7 @@ static u8 arc_or_r(u8 *buf, u8 rd, u8 rs1, u8 rs2) { const u32 insn = OPC_OR | OP_A(rd) | OP_B(rs1) | OP_C(rs2); - if (emit) + if (buf) emit_4_bytes(buf, insn); return INSN_len_normal; } @@ -1149,7 +1149,7 @@ static u8 arc_or_i(u8 *buf, u8 rd, s32 imm) { const u32 insn = OPC_ORI | OP_A(rd) | OP_B(rd); - if (emit) { + if (buf) { emit_4_bytes(buf, insn); emit_4_bytes(buf+INSN_len_normal, imm); } @@ -1160,7 +1160,7 @@ static u8 arc_xor_r(u8 *buf, u8 rd, u8 rs) { const u32 insn = OPC_XOR | OP_A(rd) | OP_B(rd) | OP_C(rs); - if (emit) + if (buf) emit_4_bytes(buf, insn); return INSN_len_normal; } @@ -1169,7 +1169,7 @@ static u8 arc_xor_i(u8 *buf, u8 rd, s32 imm) { const u32 insn = OPC_XORI | OP_A(rd) | OP_B(rd); - if (emit) { + if (buf) { 
emit_4_bytes(buf, insn); emit_4_bytes(buf+INSN_len_normal, imm); } @@ -1180,7 +1180,7 @@ static u8 arc_not_r(u8 *buf, u8 rd, u8 rs) { const u32 insn = OPC_NOT | OP_B(rd) | OP_C(rs); - if (emit) + if (buf) emit_4_bytes(buf, insn); return INSN_len_normal; } @@ -1189,7 +1189,7 @@ static u8 arc_btst_i(u8 *buf, u8 rs, u8 imm) { const u32 insn = OPC_BTSTU6 | OP_B(rs) | BTST_U6(imm); - if (emit) + if (buf) emit_4_bytes(buf, insn); return INSN_len_normal; } @@ -1198,7 +1198,7 @@ static u8 arc_asl_r(u8 *buf, u8 rd, u8 rs1, u8 rs2) { const u32 insn = OPC_ASL | OP_A(rd) | OP_B(rs1) | OP_C(rs2); - if (emit) + if (buf) emit_4_bytes(buf, insn); return INSN_len_normal; } @@ -1207,7 +1207,7 @@ static u8 arc_asli_r(u8 *buf, u8 rd, u8 rs, u8 imm) { const u32 insn = OPC_ASLI | OP_A(rd) | OP_B(rs) | ASLI_U6(imm); - if (emit) + if (buf) emit_4_bytes(buf, insn); return INSN_len_normal; } @@ -1216,7 +1216,7 @@ static u8 arc_asr_r(u8 *buf, u8 rd, u8 rs1, u8 rs2) { const u32 insn = OPC_ASR | OP_A(rd) | OP_B(rs1) | OP_C(rs2); - if (emit) + if (buf) emit_4_bytes(buf, insn); return INSN_len_normal; } @@ -1225,7 +1225,7 @@ static u8 arc_asri_r(u8 *buf, u8 rd, u8 rs, u8 imm) { const u32 insn = OPC_ASRI | OP_A(rd) | OP_B(rs) | ASRI_U6(imm); - if (emit) + if (buf) emit_4_bytes(buf, insn); return INSN_len_normal; } @@ -1234,7 +1234,7 @@ static u8 arc_lsr_r(u8 *buf, u8 rd, u8 rs1, u8 rs2) { const u32 insn = OPC_LSR | OP_A(rd) | OP_B(rs1) | OP_C(rs2); - if (emit) + if (buf) emit_4_bytes(buf, insn); return INSN_len_normal; } @@ -1243,7 +1243,7 @@ static u8 arc_lsri_r(u8 *buf, u8 rd, u8 rs, u8 imm) { const u32 insn = OPC_LSRI | OP_A(rd) | OP_B(rs) | LSRI_U6(imm); - if (emit) + if (buf) emit_4_bytes(buf, insn); return INSN_len_normal; } @@ -1252,14 +1252,14 @@ static u8 arc_swape_r(u8 *buf, u8 r) { const u32 insn = OPC_SWAPE | OP_B(r) | OP_C(r); - if (emit) + if (buf) emit_4_bytes(buf, insn); return INSN_len_normal; } static u8 arc_jmp_return(u8 *buf) { - if (emit) + if (buf) emit_4_bytes(buf, 
OPC_J_BLINK); return INSN_len_normal; } @@ -1268,7 +1268,7 @@ static u8 arc_jl(u8 *buf, u8 reg) { const u32 insn = OPC_JL | OP_C(reg); - if (emit) + if (buf) emit_4_bytes(buf, insn); return INSN_len_normal; } @@ -1282,7 +1282,7 @@ static u8 arc_bcc(u8 *buf, u8 cc, int offset) { const u32 insn = OPC_BCC | BCC_S21(offset) | COND(cc); - if (emit) + if (buf) emit_4_bytes(buf, insn); return INSN_len_normal; } @@ -1296,7 +1296,7 @@ static u8 arc_b(u8 *buf, s32 offset) { const u32 insn = OPC_B | B_S25(offset); - if (emit) + if (buf) emit_4_bytes(buf, insn); return INSN_len_normal; } @@ -1348,8 +1348,10 @@ u8 mov_r64(u8 *buf, u8 rd, u8 rs, u8 sign_ext) len = mov_r32(buf, rd, rs, sign_ext); /* Now propagate the sign bit of LO to HI. */ - if (sign_ext == 8 || sign_ext == 16 || sign_ext == 32) - len += arc_asri_r(buf+len, REG_HI(rd), REG_LO(rd), 31); + if (sign_ext == 8 || sign_ext == 16 || sign_ext == 32) { + len += arc_asri_r(BUF(buf, len), + REG_HI(rd), REG_LO(rd), 31); + } return len; } @@ -1362,10 +1364,10 @@ u8 mov_r64(u8 *buf, u8 rd, u8 rs, u8 sign_ext) len = arc_mov_r(buf, REG_LO(rd), REG_LO(rs)); if (rs != BPF_REG_FP) - len += arc_mov_r(buf+len, REG_HI(rd), REG_HI(rs)); + len += arc_mov_r(BUF(buf, len), REG_HI(rd), REG_HI(rs)); /* BPF_REG_FP is mapped to 32-bit "fp" register. */ else - len += arc_movi_r(buf+len, REG_HI(rd), 0); + len += arc_movi_r(BUF(buf, len), REG_HI(rd), 0); return len; } @@ -1380,9 +1382,9 @@ u8 mov_r64_i32(u8 *buf, u8 reg, s32 imm) /* BPF_REG_FP is mapped to 32-bit "fp" register. 
*/ if (reg != BPF_REG_FP) { if (imm >= 0) - len += arc_movi_r(buf+len, REG_HI(reg), 0); + len += arc_movi_r(BUF(buf, len), REG_HI(reg), 0); else - len += arc_movi_r(buf+len, REG_HI(reg), -1); + len += arc_movi_r(BUF(buf, len), REG_HI(reg), -1); } return len; @@ -1420,7 +1422,7 @@ u8 mov_r64_i64(u8 *buf, u8 reg, u32 lo, u32 hi) u8 len; len = arc_mov_i_fixed(buf, REG_LO(reg), lo); - len += arc_mov_i_fixed(buf+len, REG_HI(reg), hi); + len += arc_mov_i_fixed(BUF(buf, len), REG_HI(reg), hi); return len; } @@ -1446,7 +1448,7 @@ static u8 adjust_mem_access(u8 *buf, s16 *off, u8 size, if (!IN_S9_RANGE(*off) || (size == BPF_DW && !IN_S9_RANGE(*off + 4))) { - len += arc_add_i(buf+len, + len += arc_add_i(BUF(buf, len), REG_LO(JIT_REG_TMP), REG_LO(rm), (u32) (*off)); *arc_reg_mem = REG_LO(JIT_REG_TMP); *off = 0; @@ -1463,14 +1465,15 @@ u8 store_r(u8 *buf, u8 rs, u8 rd, s16 off, u8 size) len = adjust_mem_access(buf, &off, size, rd, &arc_reg_mem); if (size == BPF_DW) { - len += arc_st_r(buf+len, REG_LO(rs), arc_reg_mem, off, - ZZ_4_byte); - len += arc_st_r(buf+len, REG_HI(rs), arc_reg_mem, off+4, - ZZ_4_byte); + len += arc_st_r(BUF(buf, len), REG_LO(rs), arc_reg_mem, + off, ZZ_4_byte); + len += arc_st_r(BUF(buf, len), REG_HI(rs), arc_reg_mem, + off+4, ZZ_4_byte); } else { u8 zz = bpf_to_arc_size(size); - len += arc_st_r(buf+len, REG_LO(rs), arc_reg_mem, off, zz); + len += arc_st_r(BUF(buf, len), REG_LO(rs), arc_reg_mem, + off, zz); } return len; @@ -1495,18 +1498,18 @@ u8 store_i(u8 *buf, s32 imm, u8 rd, s16 off, u8 size) len = adjust_mem_access(buf, &off, size, rd, &arc_reg_mem); if (size == BPF_DW) { - len += arc_mov_i(buf+len, arc_rs, imm); - len += arc_st_r(buf+len, arc_rs, arc_reg_mem, off, - ZZ_4_byte); + len += arc_mov_i(BUF(buf, len), arc_rs, imm); + len += arc_st_r(BUF(buf, len), arc_rs, arc_reg_mem, + off, ZZ_4_byte); imm = (imm >= 0 ? 
0 : -1); - len += arc_mov_i(buf+len, arc_rs, imm); - len += arc_st_r(buf+len, arc_rs, arc_reg_mem, off+4, - ZZ_4_byte); + len += arc_mov_i(BUF(buf, len), arc_rs, imm); + len += arc_st_r(BUF(buf, len), arc_rs, arc_reg_mem, + off+4, ZZ_4_byte); } else { u8 zz = bpf_to_arc_size(size); - len += arc_mov_i(buf+len, arc_rs, imm); - len += arc_st_r(buf+len, arc_rs, arc_reg_mem, off, zz); + len += arc_mov_i(BUF(buf, len), arc_rs, imm); + len += arc_st_r(BUF(buf, len), arc_rs, arc_reg_mem, off, zz); } return len; @@ -1523,12 +1526,12 @@ static u8 push_r64(u8 *buf, u8 reg) #ifdef __LITTLE_ENDIAN /* BPF_REG_FP is mapped to 32-bit "fp" register. */ if (reg != BPF_REG_FP) - len += arc_push_r(buf+len, REG_HI(reg)); - len += arc_push_r(buf+len, REG_LO(reg)); + len += arc_push_r(BUF(buf, len), REG_HI(reg)); + len += arc_push_r(BUF(buf, len), REG_LO(reg)); #else - len += arc_push_r(buf+len, REG_LO(reg)); + len += arc_push_r(BUF(buf, len), REG_LO(reg)); if (reg != BPF_REG_FP) - len += arc_push_r(buf+len, REG_HI(reg)); + len += arc_push_r(BUF(buf, len), REG_HI(reg)); #endif return len; @@ -1546,18 +1549,19 @@ u8 load_r(u8 *buf, u8 rd, u8 rs, s16 off, u8 size, bool sign_ext) /* Use LD.X only if the data size is less than 32-bit. */ if (sign_ext && (zz == ZZ_1_byte || zz == ZZ_2_byte)) { - len += arc_ldx_r(buf+len, REG_LO(rd), arc_reg_mem, - off, zz); + len += arc_ldx_r(BUF(buf, len), REG_LO(rd), + arc_reg_mem, off, zz); } else { - len += arc_ld_r(buf+len, REG_LO(rd), arc_reg_mem, - off, zz); + len += arc_ld_r(BUF(buf, len), REG_LO(rd), + arc_reg_mem, off, zz); } if (sign_ext) { /* Propagate the sign bit to the higher reg. 
*/ - len += arc_asri_r(buf+len, REG_HI(rd), REG_LO(rd), 31); + len += arc_asri_r(BUF(buf, len), + REG_HI(rd), REG_LO(rd), 31); } else { - len += arc_movi_r(buf+len, REG_HI(rd), 0); + len += arc_movi_r(BUF(buf, len), REG_HI(rd), 0); } } else if (size == BPF_DW) { /* @@ -1574,14 +1578,14 @@ u8 load_r(u8 *buf, u8 rd, u8 rs, s16 off, u8 size, bool sign_ext) * ld rx, [rb, off+0] */ if (REG_LO(rd) != arc_reg_mem) { - len += arc_ld_r(buf+len, REG_LO(rd), arc_reg_mem, + len += arc_ld_r(BUF(buf, len), REG_LO(rd), arc_reg_mem, off+0, ZZ_4_byte); - len += arc_ld_r(buf+len, REG_HI(rd), arc_reg_mem, + len += arc_ld_r(BUF(buf, len), REG_HI(rd), arc_reg_mem, off+4, ZZ_4_byte); } else { - len += arc_ld_r(buf+len, REG_HI(rd), arc_reg_mem, + len += arc_ld_r(BUF(buf, len), REG_HI(rd), arc_reg_mem, off+4, ZZ_4_byte); - len += arc_ld_r(buf+len, REG_LO(rd), arc_reg_mem, + len += arc_ld_r(BUF(buf, len), REG_LO(rd), arc_reg_mem, off+0, ZZ_4_byte); } } @@ -1607,7 +1611,7 @@ u8 add_r64(u8 *buf, u8 rd, u8 rs) u8 len; len = arc_addf_r(buf, REG_LO(rd), REG_LO(rs)); - len += arc_adc_r(buf+len, REG_HI(rd), REG_HI(rs)); + len += arc_adc_r(BUF(buf, len), REG_HI(rd), REG_HI(rs)); return len; } @@ -1617,10 +1621,10 @@ u8 add_r64_i32(u8 *buf, u8 rd, s32 imm) if (IN_U6_RANGE(imm)) { len = arc_addif_r(buf, REG_LO(rd), imm); - len += arc_adci_r(buf+len, REG_HI(rd), 0); + len += arc_adci_r(BUF(buf, len), REG_HI(rd), 0); } else { len = mov_r64_i32(buf, JIT_REG_TMP, imm); - len += add_r64(buf+len, rd, JIT_REG_TMP); + len += add_r64(BUF(buf, len), rd, JIT_REG_TMP); } return len; } @@ -1643,7 +1647,7 @@ u8 sub_r64(u8 *buf, u8 rd, u8 rs) u8 len; len = arc_subf_r(buf, REG_LO(rd), REG_LO(rs)); - len += arc_sbc_r(buf+len, REG_HI(rd), REG_HI(rs)); + len += arc_sbc_r(BUF(buf, len), REG_HI(rd), REG_HI(rs)); return len; } @@ -1652,7 +1656,7 @@ u8 sub_r64_i32(u8 *buf, u8 rd, s32 imm) u8 len; len = mov_r64_i32(buf, JIT_REG_TMP, imm); - len += sub_r64(buf+len, rd, JIT_REG_TMP); + len += sub_r64(BUF(buf, len), rd, 
JIT_REG_TMP); return len; } @@ -1672,8 +1676,8 @@ u8 neg_r64(u8 *buf, u8 r) u8 len; len = arc_not_r(buf, REG_LO(r), REG_LO(r)); - len += arc_not_r(buf+len, REG_HI(r), REG_HI(r)); - len += add_r64_i32(buf+len, r, 1); + len += arc_not_r(BUF(buf, len), REG_HI(r), REG_HI(r)); + len += add_r64_i32(BUF(buf, len), r, 1); return len; } @@ -1707,10 +1711,10 @@ u8 mul_r64(u8 *buf, u8 rd, u8 rs) u8 len; len = arc_mpy_r(buf, t0, B_hi, C_lo); - len += arc_mpy_r(buf+len, t1, B_lo, C_hi); - len += arc_mpydu_r(buf+len, B_lo, C_lo); - len += arc_add_r(buf+len, B_hi, t0); - len += arc_add_r(buf+len, B_hi, t1); + len += arc_mpy_r(BUF(buf, len), t1, B_lo, C_hi); + len += arc_mpydu_r(BUF(buf, len), B_lo, C_lo); + len += arc_add_r(BUF(buf, len), B_hi, t0); + len += arc_add_r(BUF(buf, len), B_hi, t1); return len; } @@ -1755,15 +1759,15 @@ u8 mul_r64_i32(u8 *buf, u8 rd, s32 imm) /* Is the sign-extension of the immediate "-1"? */ if (imm < 0) - len += arc_neg_r(buf+len, t1, B_lo); + len += arc_neg_r(BUF(buf, len), t1, B_lo); - len += arc_mpy_i(buf+len, t0, B_hi, imm); - len += arc_mpydu_i(buf+len, B_lo, imm); - len += arc_add_r(buf+len, B_hi, t0); + len += arc_mpy_i(BUF(buf, len), t0, B_hi, imm); + len += arc_mpydu_i(BUF(buf, len), B_lo, imm); + len += arc_add_r(BUF(buf, len), B_hi, t0); /* Add the "sign*B_lo" part, if necessary. 
*/ if (imm < 0) - len += arc_add_r(buf+len, B_hi, t1); + len += arc_add_r(BUF(buf, len), B_hi, t1); return len; } @@ -1820,8 +1824,8 @@ u8 and_r64(u8 *buf, u8 rd, u8 rs) { u8 len; - len = arc_and_r(buf, REG_LO(rd), REG_LO(rs)); - len += arc_and_r(buf+len, REG_HI(rd), REG_HI(rs)); + len = arc_and_r(buf, REG_LO(rd), REG_LO(rs)); + len += arc_and_r(BUF(buf, len), REG_HI(rd), REG_HI(rs)); return len; } @@ -1830,7 +1834,7 @@ u8 and_r64_i32(u8 *buf, u8 rd, s32 imm) u8 len; len = mov_r64_i32(buf, JIT_REG_TMP, imm); - len += and_r64(buf+len, rd, JIT_REG_TMP); + len += and_r64(BUF(buf, len), rd, JIT_REG_TMP); return len; } @@ -1853,8 +1857,8 @@ u8 or_r64(u8 *buf, u8 rd, u8 rs) { u8 len; - len = arc_or_r(buf, REG_LO(rd), REG_LO(rd), REG_LO(rs)); - len += arc_or_r(buf+len, REG_HI(rd), REG_HI(rd), REG_HI(rs)); + len = arc_or_r(buf, REG_LO(rd), REG_LO(rd), REG_LO(rs)); + len += arc_or_r(BUF(buf, len), REG_HI(rd), REG_HI(rd), REG_HI(rs)); return len; } @@ -1863,7 +1867,7 @@ u8 or_r64_i32(u8 *buf, u8 rd, s32 imm) u8 len; len = mov_r64_i32(buf, JIT_REG_TMP, imm); - len += or_r64(buf+len, rd, JIT_REG_TMP); + len += or_r64(BUF(buf, len), rd, JIT_REG_TMP); return len; } @@ -1881,8 +1885,8 @@ u8 xor_r64(u8 *buf, u8 rd, u8 rs) { u8 len; - len = arc_xor_r(buf, REG_LO(rd), REG_LO(rs)); - len += arc_xor_r(buf+len, REG_HI(rd), REG_HI(rs)); + len = arc_xor_r(buf, REG_LO(rd), REG_LO(rs)); + len += arc_xor_r(BUF(buf, len), REG_HI(rd), REG_HI(rs)); return len; } @@ -1891,7 +1895,7 @@ u8 xor_r64_i32(u8 *buf, u8 rd, s32 imm) u8 len; len = mov_r64_i32(buf, JIT_REG_TMP, imm); - len += xor_r64(buf+len, rd, JIT_REG_TMP); + len += xor_r64(BUF(buf, len), rd, JIT_REG_TMP); return len; } @@ -1952,15 +1956,15 @@ u8 lsh_r64(u8 *buf, u8 rd, u8 rs) u8 len; len = arc_not_r(buf, t0, C_lo); - len += arc_lsri_r(buf+len, t1, B_lo, 1); - len += arc_lsr_r(buf+len, t1, t1, t0); - len += arc_mov_r(buf+len, t0, C_lo); - len += arc_asl_r(buf+len, B_lo, B_lo, t0); - len += arc_asl_r(buf+len, B_hi, B_hi, t0); - len += 
arc_or_r(buf+len, B_hi, B_hi, t1); - len += arc_btst_i(buf+len, t0, 5); - len += arc_mov_cc_r(buf+len, CC_unequal, B_hi, B_lo); - len += arc_movu_cc_r(buf+len, CC_unequal, B_lo, 0); + len += arc_lsri_r(BUF(buf, len), t1, B_lo, 1); + len += arc_lsr_r(BUF(buf, len), t1, t1, t0); + len += arc_mov_r(BUF(buf, len), t0, C_lo); + len += arc_asl_r(BUF(buf, len), B_lo, B_lo, t0); + len += arc_asl_r(BUF(buf, len), B_hi, B_hi, t0); + len += arc_or_r(BUF(buf, len), B_hi, B_hi, t1); + len += arc_btst_i(BUF(buf, len), t0, 5); + len += arc_mov_cc_r(BUF(buf, len), CC_unequal, B_hi, B_lo); + len += arc_movu_cc_r(BUF(buf, len), CC_unequal, B_lo, 0); return len; } @@ -1987,12 +1991,12 @@ u8 lsh_r64_i32(u8 *buf, u8 rd, s32 imm) return 0; } else if (n <= 31) { len = arc_lsri_r(buf, t0, B_lo, 32 - n); - len += arc_asli_r(buf+len, B_lo, B_lo, n); - len += arc_asli_r(buf+len, B_hi, B_hi, n); - len += arc_or_r(buf+len, B_hi, B_hi, t0); + len += arc_asli_r(BUF(buf, len), B_lo, B_lo, n); + len += arc_asli_r(BUF(buf, len), B_hi, B_hi, n); + len += arc_or_r(BUF(buf, len), B_hi, B_hi, t0); } else if (n <= 63) { len = arc_asli_r(buf, B_hi, B_lo, n - 32); - len += arc_movi_r(buf+len, B_lo, 0); + len += arc_movi_r(BUF(buf, len), B_lo, 0); } /* n >= 64 is undefined behaviour. 
*/ @@ -2047,15 +2051,15 @@ u8 rsh_r64(u8 *buf, u8 rd, u8 rs) u8 len; len = arc_not_r(buf, t0, C_lo); - len += arc_asli_r(buf+len, t1, B_hi, 1); - len += arc_asl_r(buf+len, t1, t1, t0); - len += arc_mov_r(buf+len, t0, C_lo); - len += arc_lsr_r(buf+len, B_hi, B_hi, t0); - len += arc_lsr_r(buf+len, B_lo, B_lo, t0); - len += arc_or_r(buf+len, B_lo, B_lo, t1); - len += arc_btst_i(buf+len, t0, 5); - len += arc_mov_cc_r(buf+len, CC_unequal, B_lo, B_hi); - len += arc_movu_cc_r(buf+len, CC_unequal, B_hi, 0); + len += arc_asli_r(BUF(buf, len), t1, B_hi, 1); + len += arc_asl_r(BUF(buf, len), t1, t1, t0); + len += arc_mov_r(BUF(buf, len), t0, C_lo); + len += arc_lsr_r(BUF(buf, len), B_hi, B_hi, t0); + len += arc_lsr_r(BUF(buf, len), B_lo, B_lo, t0); + len += arc_or_r(BUF(buf, len), B_lo, B_lo, t1); + len += arc_btst_i(BUF(buf, len), t0, 5); + len += arc_mov_cc_r(BUF(buf, len), CC_unequal, B_lo, B_hi); + len += arc_movu_cc_r(BUF(buf, len), CC_unequal, B_hi, 0); return len; } @@ -2082,12 +2086,12 @@ u8 rsh_r64_i32(u8 *buf, u8 rd, s32 imm) return 0; } else if (n <= 31) { len = arc_asli_r(buf, t0, B_hi, 32 - n); - len += arc_lsri_r(buf+len, B_lo, B_lo, n); - len += arc_lsri_r(buf+len, B_hi, B_hi, n); - len += arc_or_r(buf+len, B_lo, B_lo, t0); + len += arc_lsri_r(BUF(buf, len), B_lo, B_lo, n); + len += arc_lsri_r(BUF(buf, len), B_hi, B_hi, n); + len += arc_or_r(BUF(buf, len), B_lo, B_lo, t0); } else if (n <= 63) { len = arc_lsri_r(buf, B_lo, B_hi, n - 32); - len += arc_movi_r(buf+len, B_hi, 0); + len += arc_movi_r(BUF(buf, len), B_hi, 0); } /* n >= 64 is undefined behaviour. 
 */
@@ -2144,16 +2148,16 @@ u8 arsh_r64(u8 *buf, u8 rd, u8 rs)
 	u8 len;
 	len = arc_not_r(buf, t0, C_lo);
-	len += arc_asli_r(buf+len, t1, B_hi, 1);
-	len += arc_asl_r(buf+len, t1, t1, t0);
-	len += arc_mov_r(buf+len, t0, C_lo);
-	len += arc_asr_r(buf+len, B_hi, B_hi, t0);
-	len += arc_lsr_r(buf+len, B_lo, B_lo, t0);
-	len += arc_or_r(buf+len, B_lo, B_lo, t1);
-	len += arc_btst_i(buf+len, t0, 5);
-	len += arc_asri_r(buf+len, t0, B_hi, 31);
-	len += arc_mov_cc_r(buf+len, CC_unequal, B_lo, B_hi);
-	len += arc_mov_cc_r(buf+len, CC_unequal, B_hi, t0);
+	len += arc_asli_r(BUF(buf, len), t1, B_hi, 1);
+	len += arc_asl_r(BUF(buf, len), t1, t1, t0);
+	len += arc_mov_r(BUF(buf, len), t0, C_lo);
+	len += arc_asr_r(BUF(buf, len), B_hi, B_hi, t0);
+	len += arc_lsr_r(BUF(buf, len), B_lo, B_lo, t0);
+	len += arc_or_r(BUF(buf, len), B_lo, B_lo, t1);
+	len += arc_btst_i(BUF(buf, len), t0, 5);
+	len += arc_asri_r(BUF(buf, len), t0, B_hi, 31);
+	len += arc_mov_cc_r(BUF(buf, len), CC_unequal, B_lo, B_hi);
+	len += arc_mov_cc_r(BUF(buf, len), CC_unequal, B_hi, t0);
 	return len;
 }
@@ -2180,14 +2184,14 @@ u8 arsh_r64_i32(u8 *buf, u8 rd, s32 imm)
 		return 0;
 	} else if (n <= 31) {
 		len = arc_asli_r(buf, t0, B_hi, 32 - n);
-		len += arc_lsri_r(buf+len, B_lo, B_lo, n);
-		len += arc_asri_r(buf+len, B_hi, B_hi, n);
-		len += arc_or_r(buf+len, B_lo, B_lo, t0);
+		len += arc_lsri_r(BUF(buf, len), B_lo, B_lo, n);
+		len += arc_asri_r(BUF(buf, len), B_hi, B_hi, n);
+		len += arc_or_r(BUF(buf, len), B_lo, B_lo, t0);
 	} else if (n <= 63) {
 		len = arc_asri_r(buf, B_lo, B_hi, n - 32);
-		len += arc_movi_r(buf+len, B_hi, -1);
-		len += arc_btst_i(buf+len, B_lo, 31);
-		len += arc_movu_cc_r(buf+len, CC_equal, B_hi, 0);
+		len += arc_movi_r(BUF(buf, len), B_hi, -1);
+		len += arc_btst_i(BUF(buf, len), B_lo, 31);
+		len += arc_movu_cc_r(BUF(buf, len), CC_equal, B_hi, 0);
 	}
 	/* n >= 64 is undefined behaviour.
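The arsh_r64_i32() hunk differs from the logical variant in two places: the high word uses an *arithmetic* shift, and for n >= 32 the new high word is the sign fill, produced by `movi hi, -1` followed by `btst lo, 31` / `mov.eq hi, 0`. A hedged C sketch (hypothetical helper name; relies on the usual arithmetic behaviour of `>>` on signed values, which the ASR instruction guarantees on ARC):

```c
#include <stdint.h>

/* Sketch of arsh_r64_i32(): 64-bit arithmetic right shift by constant n,
 * on a value kept as two 32-bit halves.
 */
static uint64_t arsh64_i32(uint32_t hi, uint32_t lo, uint32_t n)
{
	if (n == 0) {
		/* nothing to do */
	} else if (n <= 31) {
		uint32_t t0 = hi << (32 - n);	/* hi bits crossing into lo */

		lo = (lo >> n) | t0;
		hi = (uint32_t)((int32_t)hi >> n);	/* asri */
	} else if (n <= 63) {
		lo = (uint32_t)((int32_t)hi >> (n - 32));
		/* movi hi,-1; btst lo,31; mov.eq hi,0 */
		hi = (lo & 0x80000000u) ? 0xffffffffu : 0;
	}
	/* n >= 64 is undefined behaviour. */
	return ((uint64_t)hi << 32) | lo;
}
```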
 */
@@ -2209,10 +2213,10 @@ u8 gen_swap(u8 *buf, u8 rd, u8 size, u8 endian, bool force)
 	if ((force == false) && (host_endian == endian)) {
 		switch (size) {
 		case 16:
-			len += arc_and_i(buf+len, REG_LO(rd), 0xffff);
+			len += arc_and_i(BUF(buf, len), REG_LO(rd), 0xffff);
 			fallthrough;
 		case 32:
-			len += zext(buf+len, rd);
+			len += zext(BUF(buf, len), rd);
 			fallthrough;
 		case 64:
 			break;
@@ -2226,11 +2230,12 @@ u8 gen_swap(u8 *buf, u8 rd, u8 size, u8 endian, bool force)
 			 * r = B4B3_B2B1 << 16 --> r = B2B1_0000
 			 * swape(r) is 0000_B1B2
 			 */
-			len += arc_asli_r(buf+len, REG_LO(rd), REG_LO(rd), 16);
+			len += arc_asli_r(BUF(buf, len),
+					  REG_LO(rd), REG_LO(rd), 16);
 			fallthrough;
 		case 32:
-			len += arc_swape_r(buf+len, REG_LO(rd));
-			len += zext(buf+len, rd);
+			len += arc_swape_r(BUF(buf, len), REG_LO(rd));
+			len += zext(BUF(buf, len), rd);
 			break;
 		case 64:
 			/*
@@ -2240,11 +2245,11 @@ u8 gen_swap(u8 *buf, u8 rd, u8 size, u8 endian, bool force)
 			 * hi ^= lo;
 			 * and then swap the bytes in "hi" and "lo".
 			 */
-			len += arc_xor_r(buf+len, REG_HI(rd), REG_LO(rd));
-			len += arc_xor_r(buf+len, REG_LO(rd), REG_HI(rd));
-			len += arc_xor_r(buf+len, REG_HI(rd), REG_LO(rd));
-			len += arc_swape_r(buf+len, REG_LO(rd));
-			len += arc_swape_r(buf+len, REG_HI(rd));
+			len += arc_xor_r(BUF(buf, len), REG_HI(rd), REG_LO(rd));
+			len += arc_xor_r(BUF(buf, len), REG_LO(rd), REG_HI(rd));
+			len += arc_xor_r(BUF(buf, len), REG_HI(rd), REG_LO(rd));
+			len += arc_swape_r(BUF(buf, len), REG_LO(rd));
+			len += arc_swape_r(BUF(buf, len), REG_HI(rd));
 			break;
 		default:
 			/* The caller must have handled this.
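The 64-bit case of gen_swap() above exchanges the two halves with the classic three-XOR swap (no scratch register needed) and then byte-swaps each half with SWAPE. The same computation in plain C, with `swape` modelling the instruction's 32-bit byte reversal (helper names are illustrative):

```c
#include <stdint.h>

/* Models ARCv2's SWAPE: reverse the four bytes of a 32-bit word. */
static uint32_t swape(uint32_t x)
{
	return (x >> 24) | ((x >> 8) & 0x0000ff00u) |
	       ((x << 8) & 0x00ff0000u) | (x << 24);
}

/* 64-bit byte swap as gen_swap() emits it for size == 64. */
static uint64_t bswap64_pair(uint32_t hi, uint32_t lo)
{
	hi ^= lo;	/* xor-swap the halves, no temporary register */
	lo ^= hi;
	hi ^= lo;
	return ((uint64_t)swape(hi) << 32) | swape(lo);
}
```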
 */
@@ -2271,9 +2276,9 @@ static inline u8 frame_create(u8 *buf, u16 size)
 	len = arc_mov_r(buf, ARC_R_FP, ARC_R_SP);
 	if (IN_U6_RANGE(size))
-		len += arc_subi_r(buf+len, ARC_R_SP, size);
+		len += arc_subi_r(BUF(buf, len), ARC_R_SP, size);
 	else
-		len += arc_sub_i(buf+len, ARC_R_SP, size);
+		len += arc_sub_i(BUF(buf, len), ARC_R_SP, size);
 	return len;
 }
@@ -2298,7 +2303,7 @@ static u8 bpf_to_arc_return(u8 *buf)
 	u8 len;
 	len = arc_mov_r(buf, ARC_R_0, REG_LO(BPF_REG_0));
-	len += arc_mov_r(buf+len, ARC_R_1, REG_HI(BPF_REG_0));
+	len += arc_mov_r(BUF(buf, len), ARC_R_1, REG_HI(BPF_REG_0));
 	return len;
 }
@@ -2313,7 +2318,7 @@ u8 arc_to_bpf_return(u8 *buf)
 	u8 len;
 	len = arc_mov_r(buf, REG_LO(BPF_REG_0), ARC_R_0);
-	len += arc_mov_r(buf+len, REG_HI(BPF_REG_0), ARC_R_1);
+	len += arc_mov_r(BUF(buf, len), REG_HI(BPF_REG_0), ARC_R_1);
 	return len;
 }
@@ -2342,7 +2347,7 @@ static u8 jump_and_link(u8 *buf, u32 addr)
 	u8 len;
 	len = arc_mov_i_fixed(buf, REG_LO(JIT_REG_TMP), addr);
-	len += arc_jl(buf+len, REG_LO(JIT_REG_TMP));
+	len += arc_jl(BUF(buf, len), REG_LO(JIT_REG_TMP));
 	return len;
 }
@@ -2401,22 +2406,22 @@ u8 arc_prologue(u8 *buf, u32 usage, u16 frame_size)
 	/* Deal with blink first. */
 	if (usage & BIT(ARC_R_BLINK))
-		len += arc_push_r(buf+len, ARC_R_BLINK);
+		len += arc_push_r(BUF(buf, len), ARC_R_BLINK);
 	gp_regs = usage & ~(BIT(ARC_R_BLINK) | BIT(ARC_R_FP));
 	while (gp_regs) {
 		u8 reg = __builtin_ffs(gp_regs) - 1;
-		len += arc_push_r(buf+len, reg);
+		len += arc_push_r(BUF(buf, len), reg);
 		gp_regs &= ~BIT(reg);
 	}
 	/* Deal with fp last.
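The prologue's save loop above walks the "usage" bitmask from the lowest set bit upward with `__builtin_ffs()`, pushing one register per bit; the epilogue (below) pops in the opposite order via `31 - __builtin_clz()`. A small C sketch of the prologue-side walk, recording the visiting order instead of emitting pushes (helper name is hypothetical):

```c
#include <stdint.h>

/* Visit each set bit of gp_regs from lowest to highest, as
 * arc_prologue() does when deciding which registers to push_r.
 * Returns how many registers were recorded into "out".
 */
static int saved_reg_order(uint32_t gp_regs, uint8_t out[32])
{
	int n = 0;

	while (gp_regs) {
		uint8_t reg = (uint8_t)(__builtin_ffs((int)gp_regs) - 1);

		out[n++] = reg;		/* the JIT would emit "push reg" */
		gp_regs &= ~(1u << reg);
	}
	return n;
}
```

Pushing low-to-high and popping high-to-low keeps the stack layout symmetric without storing the order anywhere.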
 */
 	if ((usage & BIT(ARC_R_FP)) || (frame_size > 0))
-		len += arc_push_r(buf+len, ARC_R_FP);
+		len += arc_push_r(BUF(buf, len), ARC_R_FP);
 	if (frame_size > 0)
-		len += frame_create(buf+len, frame_size);
+		len += frame_create(BUF(buf, len), frame_size);
 #ifdef ARC_BPF_JIT_DEBUG
 	if ((usage & BIT(ARC_R_FP)) && (frame_size == 0)) {
@@ -2453,28 +2458,28 @@ u8 arc_epilogue(u8 *buf, u32 usage, u16 frame_size)
 #endif
 	if (frame_size > 0)
-		len += frame_restore(buf+len);
+		len += frame_restore(BUF(buf, len));
 	/* Deal with fp first. */
 	if ((usage & BIT(ARC_R_FP)) || (frame_size > 0))
-		len += arc_pop_r(buf+len, ARC_R_FP);
+		len += arc_pop_r(BUF(buf, len), ARC_R_FP);
 	gp_regs = usage & ~(BIT(ARC_R_BLINK) | BIT(ARC_R_FP));
 	while (gp_regs) {
 		/* "usage" is 32-bit, each bit indicating an ARC register. */
 		u8 reg = 31 - __builtin_clz(gp_regs);
-		len += arc_pop_r(buf+len, reg);
+		len += arc_pop_r(BUF(buf, len), reg);
 		gp_regs &= ~BIT(reg);
 	}
 	/* Deal with blink last. */
 	if (usage & BIT(ARC_R_BLINK))
-		len += arc_pop_r(buf+len, ARC_R_BLINK);
+		len += arc_pop_r(BUF(buf, len), ARC_R_BLINK);
 	/* Wrap up the return value and jump back to the caller. */
-	len += bpf_to_arc_return(buf+len);
-	len += arc_jmp_return(buf+len);
+	len += bpf_to_arc_return(BUF(buf, len));
+	len += arc_jmp_return(BUF(buf, len));
 	return len;
 }
@@ -2672,10 +2677,10 @@ static int gen_j_eq_64(u8 *buf, u8 rd, u8 rs, bool eq,
 	s32 disp;
 	u8 len = 0;
-	len += arc_cmp_r(buf+len, REG_HI(rd), REG_HI(rs));
-	len += arc_cmpz_r(buf+len, REG_LO(rd), REG_LO(rs));
+	len += arc_cmp_r(BUF(buf, len), REG_HI(rd), REG_HI(rs));
+	len += arc_cmpz_r(BUF(buf, len), REG_LO(rd), REG_LO(rs));
 	disp = get_displacement(curr_off + len, targ_off);
-	len += arc_bcc(buf+len, eq ? CC_equal : CC_unequal, disp);
+	len += arc_bcc(BUF(buf, len), eq ?
+		       CC_equal : CC_unequal, disp);
 	return len;
 }
@@ -2690,10 +2695,10 @@ static u8 gen_jset_64(u8 *buf, u8 rd, u8 rs, u32 curr_off, u32 targ_off)
 	u8 len = 0;
 	s32 disp;
-	len += arc_tst_r(buf+len, REG_HI(rd), REG_HI(rs));
-	len += arc_tstz_r(buf+len, REG_LO(rd), REG_LO(rs));
+	len += arc_tst_r(BUF(buf, len), REG_HI(rd), REG_HI(rs));
+	len += arc_tstz_r(BUF(buf, len), REG_LO(rd), REG_LO(rs));
 	disp = get_displacement(curr_off + len, targ_off);
-	len += arc_bcc(buf+len, CC_unequal, disp);
+	len += arc_bcc(BUF(buf, len), CC_unequal, disp);
 	return len;
 }
@@ -2808,19 +2813,19 @@ static u8 gen_jcc_64(u8 *buf, u8 rd, u8 rs, u8 cond,
 	/* b @target */
 	disp = get_displacement(curr_off + len, targ_off);
-	len += arc_bcc(buf+len, cc[0], disp);
+	len += arc_bcc(BUF(buf, len), cc[0], disp);
 	/* b @end */
 	end_off = curr_off + len + (JCC64_INSNS_TO_END * INSN_len_normal);
 	disp = get_displacement(curr_off + len, end_off);
-	len += arc_bcc(buf+len, cc[1], disp);
+	len += arc_bcc(BUF(buf, len), cc[1], disp);
 	/* cmp rd_lo, rs_lo */
-	len += arc_cmp_r(buf+len, REG_LO(rd), REG_LO(rs));
+	len += arc_cmp_r(BUF(buf, len), REG_LO(rd), REG_LO(rs));
 	/* b @target */
 	disp = get_displacement(curr_off + len, targ_off);
-	len += arc_bcc(buf+len, cc[2], disp);
+	len += arc_bcc(BUF(buf, len), cc[2], disp);
 	return len;
 }
@@ -2960,7 +2965,7 @@ u8 gen_jmp_32(u8 *buf, u8 rd, u8 rs, u8 cond, u32 curr_off, u32 targ_off)
 		 * should always point to the jump instruction.
 		 */
 		disp = get_displacement(curr_off + len, targ_off);
-		len += arc_bcc(buf+len, arcv2_32_jmps[cond], disp);
+		len += arc_bcc(BUF(buf, len), arcv2_32_jmps[cond], disp);
 	} else {
 		/* The straight forward unconditional jump. */
 		disp = get_displacement(curr_off, targ_off);
@@ -2990,12 +2995,12 @@ u8 gen_func_call(u8 *buf, ARC_ADDR func_addr, bool external_func)
 	 * is done. The stack is readjusted either way after the call.
 	 */
 	if (external_func)
-		len += push_r64(buf+len, BPF_REG_5);
+		len += push_r64(BUF(buf, len), BPF_REG_5);
-	len += jump_and_link(buf+len, func_addr);
+	len += jump_and_link(BUF(buf, len), func_addr);
 	if (external_func)
-		len += arc_add_i(buf+len, ARC_R_SP, ARC_R_SP, ARG5_SIZE);
+		len += arc_add_i(BUF(buf, len), ARC_R_SP, ARC_R_SP, ARG5_SIZE);
 	return len;
 }
diff --git a/arch/arc/net/bpf_jit_core.c b/arch/arc/net/bpf_jit_core.c
index 730a715d324e..eea1a469a195 100644
--- a/arch/arc/net/bpf_jit_core.c
+++ b/arch/arc/net/bpf_jit_core.c
@@ -9,7 +9,6 @@
 #include "bpf_jit.h"
 /* Sane initial values for the globals */
-bool emit = true;
 bool zext_thyself = true;
 /*
@@ -86,6 +85,7 @@ struct arc_jit_data {
 * orig_prog:		The original eBPF program before any possible change.
 * jit:			The JIT buffer and its length.
 * bpf_header:		The JITed program header. "jit.buf" points inside it.
+ * emit:		If set, opcodes are written to memory; else, a dry-run.
 * bpf2insn:		Maps BPF insn indices to their counterparts in jit.buf.
 * bpf2insn_valid:	Indicates if "bpf2ins" is populated with the mappings.
 * jit_data:		A piece of memory to transfer data to the next pass.
@@ -104,6 +104,7 @@ struct jit_context {
 	struct bpf_prog			*orig_prog;
 	struct jit_buffer		jit;
 	struct bpf_binary_header	*bpf_header;
+	bool				emit;
 	u32				*bpf2insn;
 	bool				bpf2insn_valid;
 	struct arc_jit_data		*jit_data;
@@ -248,8 +249,8 @@ static void jit_ctx_cleanup(struct jit_context *ctx)
 		ctx->jit.len = 0;
 	}
+	ctx->emit = false;
 	/* Global booleans set to false. */
-	emit = false;
 	zext_thyself = false;
 }
@@ -277,14 +278,14 @@ static void analyze_reg_usage(struct jit_context *ctx)
 }
 /* Verify that no instruction will be emitted when there is no buffer.
 */
-static inline int jit_buffer_check(const struct jit_buffer *jbuf)
+static inline int jit_buffer_check(const struct jit_context *ctx)
 {
-	if (emit == true) {
-		if (jbuf->buf == NULL) {
+	if (ctx->emit == true) {
+		if (ctx->jit.buf == NULL) {
 			pr_err("bpf-jit: inconsistence state; no "
 			       "buffer to emit instructions.\n");
 			return -EINVAL;
-		} else if (jbuf->index > jbuf->len) {
+		} else if (ctx->jit.index > ctx->jit.len) {
 			pr_err("bpf-jit: estimated JIT length is less "
 			       "than the emitted instructions.\n");
 			return -EFAULT;
@@ -294,31 +295,31 @@ static inline int jit_buffer_check(const struct jit_buffer *jbuf)
 }
 /* On a dry-run (emit=false), "jit.len" is growing gradually. */
-static inline void jit_buffer_update(struct jit_buffer *jbuf, u32 n)
+static inline void jit_buffer_update(struct jit_context *ctx, u32 n)
 {
-	if (!emit)
-		jbuf->len += n;
+	if (!ctx->emit)
+		ctx->jit.len += n;
 	else
-		jbuf->index += n;
+		ctx->jit.index += n;
 }
 /* Based on "emit", determine the address where instructions are emitted. */
-static inline u8 *effective_jit_buf(const struct jit_buffer *jbuf)
+static inline u8 *effective_jit_buf(const struct jit_context *ctx)
 {
-	return emit ? jbuf->buf + jbuf->index : NULL;
+	return ctx->emit ? (ctx->jit.buf + ctx->jit.index) : NULL;
 }
 /* Prologue based on context variables set by "analyze_reg_usage()".
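These helpers implement the JIT's two-pass scheme: with `emit` false, `effective_jit_buf()` returns NULL and every generator only *measures* (`jit.len` grows); with `emit` true, the same generators write and advance `jit.index`. The `BUF()` macro from bpf_jit.h keeps pointer arithmetic NULL-safe during the dry-run. A self-contained toy version of the pattern (emitter and opcode values are purely illustrative):

```c
#include <stddef.h>
#include <stdint.h>

/* NULL-safe advance, same shape as BUF() in bpf_jit.h. */
#define BUF(b, n) (((b) != NULL) ? ((b) + (n)) : (b))

/* Toy 2-byte "instruction" emitter: writes only when a buffer exists,
 * but always returns the encoded length, so one code path serves both
 * the sizing pass (buf == NULL) and the emission pass.
 */
static uint32_t emit_op(uint8_t *buf, uint8_t op)
{
	if (buf) {
		buf[0] = op;
		buf[1] = 0x00;
	}
	return 2;
}

static uint32_t gen_block(uint8_t *buf)
{
	uint32_t len = 0;

	len += emit_op(BUF(buf, len), 0x10);
	len += emit_op(BUF(buf, len), 0x20);
	return len;
}
```

Usage mirrors jit_prepare()/jit_compile(): call `gen_block(NULL)` to learn the size, allocate, then call it again with the real buffer.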
 */
 static int handle_prologue(struct jit_context *ctx)
 {
 	int ret;
-	u8 *buf = effective_jit_buf(&ctx->jit);
+	u8 *buf = effective_jit_buf(ctx);
 	u32 len = 0;
-	CHECK_RET(jit_buffer_check(&ctx->jit));
+	CHECK_RET(jit_buffer_check(ctx));
 	len = arc_prologue(buf, ctx->arc_regs_clobbered, ctx->frame_size);
-	jit_buffer_update(&ctx->jit, len);
+	jit_buffer_update(ctx, len);
 	return 0;
 }
@@ -327,13 +328,13 @@ static int handle_prologue(struct jit_context *ctx)
 static int handle_epilogue(struct jit_context *ctx)
 {
 	int ret;
-	u8 *buf = effective_jit_buf(&ctx->jit);
+	u8 *buf = effective_jit_buf(ctx);
 	u32 len = 0;
-	CHECK_RET(jit_buffer_check(&ctx->jit));
+	CHECK_RET(jit_buffer_check(ctx));
 	len = arc_epilogue(buf, ctx->arc_regs_clobbered, ctx->frame_size);
-	jit_buffer_update(&ctx->jit, len);
+	jit_buffer_update(ctx, len);
 	return 0;
 }
@@ -597,7 +598,7 @@ static int handle_jumps(const struct jit_context *ctx,
 {
 	u8 cond;
 	int ret = 0;
-	u8 *buf = effective_jit_buf(&ctx->jit);
+	u8 *buf = effective_jit_buf(ctx);
 	const bool j32 = (BPF_CLASS(insn->code) == BPF_JMP32) ?
 			 true : false;
 	const u8 rd = insn->dst_reg;
 	u8 rs = insn->src_reg;
@@ -622,10 +623,10 @@ static int handle_jumps(const struct jit_context *ctx,
 	 */
 	if (has_imm(insn) && (cond != ARC_CC_AL)) {
 		if (j32) {
-			*len += mov_r32_i32(buf + *len, JIT_REG_TMP,
+			*len += mov_r32_i32(BUF(buf, *len), JIT_REG_TMP,
 					    insn->imm);
 		} else {
-			*len += mov_r64_i32(buf + *len, JIT_REG_TMP,
+			*len += mov_r64_i32(BUF(buf, *len), JIT_REG_TMP,
 					    insn->imm);
 		}
 		rs = JIT_REG_TMP;
@@ -641,10 +642,10 @@ static int handle_jumps(const struct jit_context *ctx,
 	}
 	if (j32) {
-		*len += gen_jmp_32(buf + *len, rd, rs, cond,
+		*len += gen_jmp_32(BUF(buf, *len), rd, rs, cond,
 				   curr_off, targ_off);
 	} else {
-		*len += gen_jmp_64(buf + *len, rd, rs, cond,
+		*len += gen_jmp_64(BUF(buf, *len), rd, rs, cond,
 				   curr_off, targ_off);
 	}
@@ -655,7 +656,7 @@ static int handle_jmp_epilogue(struct jit_context *ctx,
 			       const struct bpf_insn *insn, u8 *len)
 {
-	u8 *buf = effective_jit_buf(&ctx->jit);
+	u8 *buf = effective_jit_buf(ctx);
 	u32 curr_off = 0, epi_off = 0;
 	/* Check the offset only if the data is available. */
@@ -683,7 +684,7 @@ static int handle_call(struct jit_context *ctx,
 	int ret;
 	bool in_kernel_func, fixed = false;
 	u64 addr = 0;
-	u8 *buf = effective_jit_buf(&ctx->jit);
+	u8 *buf = effective_jit_buf(ctx);
 	ret = bpf_jit_get_func_addr(ctx->prog, insn, ctx->is_extra_pass,
 				    &addr, &fixed);
@@ -701,7 +702,7 @@ static int handle_call(struct jit_context *ctx,
 	if (insn->src_reg != BPF_PSEUDO_CALL) {
 		/* Assigning ABI's return reg to JIT's return reg. */
-		*len += arc_to_bpf_return(buf + *len);
+		*len += arc_to_bpf_return(BUF(buf, *len));
 	}
 	return 0;
@@ -718,7 +719,7 @@ static int handle_ld_imm64(struct jit_context *ctx,
 			   u8 *len)
 {
 	const s32 idx = get_index_for_insn(ctx, insn);
-	u8 *buf = effective_jit_buf(&ctx->jit);
+	u8 *buf = effective_jit_buf(ctx);
 	/* We're about to consume 2 VM instructions.
 */
 	if (is_last_insn(ctx->prog, idx)) {
@@ -754,7 +755,7 @@ static int handle_insn(struct jit_context *ctx, u32 idx)
 	const u8  src = insn->src_reg;
 	const s16 off = insn->off;
 	const s32 imm = insn->imm;
-	u8 *buf = effective_jit_buf(&ctx->jit);
+	u8 *buf = effective_jit_buf(ctx);
 	u8  len = 0;
 	int ret = 0;
@@ -1053,10 +1054,10 @@ static int handle_insn(struct jit_context *ctx, u32 idx)
 	 * takes care of calling "zext()" based on the input "size".
 	 */
 	if (BPF_OP(code) != BPF_END)
-			len += zext(buf+len, dst);
+			len += zext(BUF(buf, len), dst);
 	}
-	jit_buffer_update(&ctx->jit, len);
+	jit_buffer_update(ctx, len);
 	return ret;
 }
@@ -1067,14 +1068,14 @@ static int handle_body(struct jit_context *ctx)
 	bool populate_bpf2insn = false;
 	const struct bpf_prog *prog = ctx->prog;
-	CHECK_RET(jit_buffer_check(&ctx->jit));
+	CHECK_RET(jit_buffer_check(ctx));
 	/*
 	 * Record the mapping for the instructions during the dry-run.
 	 * Doing it this way allows us to have the mapping ready for
 	 * the jump instructions during the real compilation phase.
 	 */
-	if (!emit)
+	if (!ctx->emit)
 		populate_bpf2insn = true;
 	for (u32 i = 0; i < prog->len; i++) {
@@ -1173,7 +1174,7 @@ static int jit_prepare(struct jit_context *ctx)
 	int ret;
 	/* Dry run. */
-	emit = false;
+	ctx->emit = false;
 	CHECK_RET(jit_prepare_early_mem_alloc(ctx));
@@ -1207,7 +1208,7 @@ static int jit_compile(struct jit_context *ctx)
 	int ret;
 	/* Let there be code.
 */
-	emit = true;
+	ctx->emit = true;
 	CHECK_RET(handle_prologue(ctx));
@@ -1252,7 +1253,8 @@ static void jit_finalize(struct jit_context *ctx)
 	 */
 	bpf_jit_binary_lock_ro(ctx->bpf_header);
 	flush_icache_range((unsigned long) ctx->bpf_header,
-			   (unsigned long) ctx->jit.buf + ctx->jit.len);
+			   (unsigned long)
+			   BUF(ctx->jit.buf, ctx->jit.len));
 	prog->aux->jit_data = NULL;
 	bpf_prog_fill_jited_linfo(prog, ctx->bpf2insn);
 }
@@ -1315,7 +1317,7 @@ static int jit_patch_relocations(struct jit_context *ctx)
 	const struct bpf_prog *prog = ctx->prog;
 	int ret;
-	emit = true;
+	ctx->emit = true;
 	for (u32 i = 0; i < prog->len; i++) {
 		const struct bpf_insn *insn = &prog->insnsi[i];
 		u8 dummy;
@@ -1341,7 +1343,7 @@ static int jit_patch_relocations(struct jit_context *ctx)
 * to get the necessary data for the real compilation phase,
 * jit_compile().
 */
-struct bpf_prog *do_normal_pass(struct bpf_prog *prog)
+static struct bpf_prog *do_normal_pass(struct bpf_prog *prog)
 {
 	struct jit_context ctx;
@@ -1377,7 +1379,7 @@ struct bpf_prog *do_normal_pass(struct bpf_prog *prog)
 * again to get the newly translated addresses in order to resolve
 * the "call"s.
 */
-struct bpf_prog *do_extra_pass(struct bpf_prog *prog)
+static struct bpf_prog *do_extra_pass(struct bpf_prog *prog)
 {
 	struct jit_context ctx;

From patchwork Tue Apr 30 15:09:32 2024
X-Patchwork-Submitter: Shahab Vahedi
X-Patchwork-Id: 1929665
From: Shahab Vahedi
To: Björn Töpel
Cc: Shahab Vahedi, Shahab Vahedi, Vineet Gupta, linux-snps-arc@lists.infradead.org
Subject: [PATCH 2/7] v2: Turn "zext_thyself" from global into a context var
Date: Tue, 30 Apr 2024 17:09:32 +0200
Message-Id: <20240430150937.39793-3-list+bpf@vahedi.org>
In-Reply-To: <20240430150937.39793-1-list+bpf@vahedi.org>
References: <20240430150937.39793-1-list+bpf@vahedi.org>
From: Shahab Vahedi

also:
- Update some comments along the way.
- Refactor the gen_swap()'s "if/else" to present the logic better
- Remove "extern" from the proto-type
---
 arch/arc/net/bpf_jit.h       | 14 +++++-----
 arch/arc/net/bpf_jit_arcv2.c | 51 ++++++++++++++++++------------------
 arch/arc/net/bpf_jit_core.c  | 25 +++++++++---------
 3 files changed, 44 insertions(+), 46 deletions(-)

diff --git a/arch/arc/net/bpf_jit.h b/arch/arc/net/bpf_jit.h
index ecad47b8b796..9fc70d97415b 100644
--- a/arch/arc/net/bpf_jit.h
+++ b/arch/arc/net/bpf_jit.h
@@ -37,10 +37,6 @@
 */
 #define BUF(b, n) (((b) != NULL) ? ((b) + (n)) : (b))
-/************* Globals that have effects on code generation ***********/
-/* An indicator if zero-extend must be done for the 32-bit operations. */
-extern bool zext_thyself;
-
 /************** Functions that the back-end must provide **************/
 /* Extension for 32-bit operations. */
 extern inline u8 zext(u8 *buf, u8 rd);
@@ -156,10 +152,12 @@ extern u8 gen_jmp_64(u8 *buf, u8 rd, u8 rs, u8 cond, u32 c_off, u32 t_off);
 extern u8 gen_func_call(u8 *buf, ARC_ADDR func_addr, bool external_func);
 extern u8 arc_to_bpf_return(u8 *buf);
 /*
- * Perform byte swaps on "rd" based on the "size". If "force" is
- * set to "true", do it unconditionally. Otherwise, consider the
- * desired "endian"ness and the host endianness.
+ * - Perform byte swaps on "rd" based on the "size".
+ * - If "force" is set, do it unconditionally. Otherwise, consider the
+ *   desired "endian"ness and the host endianness.
+ * - For data "size"s up to 32 bits, perform a zero-extension if asked
+ *   by the "do_zext" boolean.
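The new `do_zext` parameter described above decouples zero-extension from the swap itself: when host and desired endianness already match, a 16-bit "swap" reduces to masking the low half, and the upper 32 bits are cleared only if the verifier has not already guaranteed it. A C sketch of that same-endian path (hypothetical helper name, combined return value for checking):

```c
#include <stdint.h>

/* Same-endian, size==16 path of gen_swap(): mask the low half and
 * optionally zero-extend, mirroring "and_i lo, 0xffff" + zext().
 */
static uint64_t swap16_same_endian(uint32_t hi, uint32_t lo, int do_zext)
{
	lo &= 0xffff;		/* and_i REG_LO(rd), 0xffff */
	if (do_zext)
		hi = 0;		/* zext only when the verifier didn't */
	return ((uint64_t)hi << 32) | lo;
}
```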
 */
-extern u8 gen_swap(u8 *buf, u8 rd, u8 size, u8 endian, bool force);
+u8 gen_swap(u8 *buf, u8 rd, u8 size, u8 endian, bool force, bool do_zext);
 #endif /* _ARC_BPF_JIT_H */
diff --git a/arch/arc/net/bpf_jit_arcv2.c b/arch/arc/net/bpf_jit_arcv2.c
index b9e803f04a36..8b7ae2f11f38 100644
--- a/arch/arc/net/bpf_jit_arcv2.c
+++ b/arch/arc/net/bpf_jit_arcv2.c
@@ -1305,7 +1305,7 @@ static u8 arc_b(u8 *buf, s32 offset)
 inline u8 zext(u8 *buf, u8 rd)
 {
-	if (zext_thyself && rd != BPF_REG_FP)
+	if (rd != BPF_REG_FP)
 		return arc_movi_r(buf, REG_HI(rd), 0);
 	else
 		return 0;
@@ -2198,7 +2198,7 @@ u8 arsh_r64_i32(u8 *buf, u8 rd, s32 imm)
 	return len;
 }
-u8 gen_swap(u8 *buf, u8 rd, u8 size, u8 endian, bool force)
+u8 gen_swap(u8 *buf, u8 rd, u8 size, u8 endian, bool force, bool do_zext)
 {
 	u8 len = 0;
 #ifdef __BIG_ENDIAN
@@ -2206,36 +2206,19 @@ u8 gen_swap(u8 *buf, u8 rd, u8 size, u8 endian, bool force)
 #else
 	const u8 host_endian = BPF_FROM_LE;
 #endif
-	/*
-	 * If the same endianness, there's not much to do other
-	 * than zeroing out the upper bytes based on the "size".
-	 */
-	if ((force == false) && (host_endian == endian)) {
-		switch (size) {
-		case 16:
-			len += arc_and_i(BUF(buf, len), REG_LO(rd), 0xffff);
-			fallthrough;
-		case 32:
-			len += zext(BUF(buf, len), rd);
-			fallthrough;
-		case 64:
-			break;
-		default:
-			/* The caller must have handled this. */
-		}
-	} else {
+	if (host_endian != endian || force) {
 		switch (size) {
 		case 16:
 			/*
 			 * r = B4B3_B2B1 << 16 --> r = B2B1_0000
-			 * swape(r) is 0000_B1B2
+			 * then, swape(r) would become the desired 0000_B1B2
 			 */
-			len += arc_asli_r(BUF(buf, len),
-					  REG_LO(rd), REG_LO(rd), 16);
+			len = arc_asli_r(buf, REG_LO(rd), REG_LO(rd), 16);
 			fallthrough;
 		case 32:
 			len += arc_swape_r(BUF(buf, len), REG_LO(rd));
-			len += zext(BUF(buf, len), rd);
+			if (do_zext)
+				len += zext(BUF(buf, len), rd);
 			break;
 		case 64:
 			/*
@@ -2245,7 +2228,7 @@ u8 gen_swap(u8 *buf, u8 rd, u8 size, u8 endian, bool force)
 			 * hi ^= lo;
 			 * and then swap the bytes in "hi" and "lo".
 			 */
-			len += arc_xor_r(BUF(buf, len), REG_HI(rd), REG_LO(rd));
+			len = arc_xor_r(buf, REG_HI(rd), REG_LO(rd));
 			len += arc_xor_r(BUF(buf, len), REG_LO(rd), REG_HI(rd));
 			len += arc_xor_r(BUF(buf, len), REG_HI(rd), REG_LO(rd));
 			len += arc_swape_r(BUF(buf, len), REG_LO(rd));
@@ -2254,6 +2237,24 @@ u8 gen_swap(u8 *buf, u8 rd, u8 size, u8 endian, bool force)
 		default:
 			/* The caller must have handled this. */
 		}
+	} else {
+		/*
+		 * If the same endianness, there's not much to do other
+		 * than zeroing out the upper bytes based on the "size".
+		 */
+		switch (size) {
+		case 16:
+			len = arc_and_i(buf, REG_LO(rd), 0xffff);
+			fallthrough;
+		case 32:
+			if (do_zext)
+				len += zext(BUF(buf, len), rd);
+			break;
+		case 64:
+			break;
+		default:
+			/* The caller must have handled this. */
+		}
 	}
 	return len;
diff --git a/arch/arc/net/bpf_jit_core.c b/arch/arc/net/bpf_jit_core.c
index eea1a469a195..79ec0bbf1153 100644
--- a/arch/arc/net/bpf_jit_core.c
+++ b/arch/arc/net/bpf_jit_core.c
@@ -8,9 +8,6 @@
 #include
 #include "bpf_jit.h"
-/* Sane initial values for the globals */
-bool zext_thyself = true;
-
 /*
 * Check for the return value. A pattern used oftenly in this file.
 * There must be a "ret" variable of type "int" in the scope.
@@ -86,6 +83,7 @@ struct arc_jit_data {
 * jit:			The JIT buffer and its length.
 * bpf_header:		The JITed program header. "jit.buf" points inside it.
 * emit:		If set, opcodes are written to memory; else, a dry-run.
+ * do_zext:		If true, 32-bit sub-regs must be zero extended.
 * bpf2insn:		Maps BPF insn indices to their counterparts in jit.buf.
 * bpf2insn_valid:	Indicates if "bpf2ins" is populated with the mappings.
 * jit_data:		A piece of memory to transfer data to the next pass.
@@ -105,6 +103,7 @@ struct jit_context {
 	struct jit_buffer		jit;
 	struct bpf_binary_header	*bpf_header;
 	bool				emit;
+	bool				do_zext;
 	u32				*bpf2insn;
 	bool				bpf2insn_valid;
 	struct arc_jit_data		*jit_data;
@@ -185,7 +184,7 @@ static int jit_ctx_init(struct jit_context *ctx, struct bpf_prog *prog)
 	ctx->success = false;
 	/* If the verifier doesn't zero-extend, then we have to do it. */
-	zext_thyself = !ctx->prog->aux->verifier_zext;
+	ctx->do_zext = !ctx->prog->aux->verifier_zext;
 	return 0;
 }
@@ -250,8 +249,7 @@ static void jit_ctx_cleanup(struct jit_context *ctx)
 	}
 	ctx->emit = false;
-	/* Global booleans set to false. */
-	zext_thyself = false;
+	ctx->do_zext = false;
 }
 /*
@@ -415,7 +413,7 @@ static inline void set_need_for_extra_pass(struct jit_context *ctx)
 * the back-end for the swap.
 */
 static int handle_swap(u8 *buf, u8 rd, u8 size, u8 endian,
-		       bool force, u8 *len)
+		       bool force, bool do_zext, u8 *len)
 {
 	/* Sanity check on the size. */
 	switch (size) {
@@ -428,7 +426,7 @@ static int handle_swap(u8 *buf, u8 rd, u8 size, u8 endian,
 		return -EINVAL;
 	}
-	*len = gen_swap(buf, rd, size, endian, force);
+	*len = gen_swap(buf, rd, size, endian, force, do_zext);
 	return 0;
 }
@@ -866,7 +864,7 @@ static int handle_insn(struct jit_context *ctx, u32 idx)
 	case BPF_ALU64 | BPF_END | BPF_FROM_LE: {
 		CHECK_RET(handle_swap(buf, dst, imm, BPF_SRC(code),
 				      BPF_CLASS(code) == BPF_ALU64,
-				      &len));
+				      ctx->do_zext, &len));
 		break;
 	}
 	/* dst += src (64-bit) */
@@ -1049,11 +1047,12 @@ static int handle_insn(struct jit_context *ctx, u32 idx)
 	if (BPF_CLASS(code) == BPF_ALU) {
 		/*
-		 * Even 64-bit swaps are of type BPF_ALU (and not BPF_ALU64).
-		 * Therefore, the routine responsible for "swap" specifically
-		 * takes care of calling "zext()" based on the input "size".
+		 * Skip the "swap" instructions. Even 64-bit swaps are of type
+		 * BPF_ALU (and not BPF_ALU64). Therefore, for the swaps, one
+		 * has to look at the "size" of the operations rather than the
+		 * ALU type.
		 * "gen_swap()" specifically takes care of that.
 		 */
-		if (BPF_OP(code) != BPF_END)
+		if (BPF_OP(code) != BPF_END && ctx->do_zext)
 			len += zext(BUF(buf, len), dst);
 	}

From patchwork Tue Apr 30 15:09:33 2024
X-Patchwork-Submitter: Shahab Vahedi
X-Patchwork-Id: 1929664
From: Shahab Vahedi
To: Björn Töpel
Cc: Shahab Vahedi, Shahab Vahedi, Vineet Gupta, linux-snps-arc@lists.infradead.org
Subject: [PATCH 3/7] v2: Use memset() in jit_ctx_init()
Date: Tue, 30 Apr 2024 17:09:33 +0200
Message-Id: <20240430150937.39793-4-list+bpf@vahedi.org>
In-Reply-To: <20240430150937.39793-1-list+bpf@vahedi.org>
References: <20240430150937.39793-1-list+bpf@vahedi.org>

From: Shahab Vahedi

---
 arch/arc/net/bpf_jit_core.c | 21 +++++----------------
 1 file changed, 5 insertions(+), 16 deletions(-)

diff --git a/arch/arc/net/bpf_jit_core.c b/arch/arc/net/bpf_jit_core.c
index 79ec0bbf1153..9c0fdd514967 100644
--- a/arch/arc/net/bpf_jit_core.c
+++ b/arch/arc/net/bpf_jit_core.c
@@ -159,6 +159,8 @@ static void jit_dump(const struct jit_context *ctx)
 /* Initialise the context so there's no garbage. */
 static int jit_ctx_init(struct jit_context *ctx, struct bpf_prog *prog)
 {
+	memset(ctx, 0, sizeof(*ctx));
+
 	ctx->orig_prog = prog;
 
 	/* If constant blinding was requested but failed, scram. */
@@ -167,25 +169,12 @@ static int jit_ctx_init(struct jit_context *ctx, struct bpf_prog *prog)
 		return PTR_ERR(ctx->prog);
 	ctx->blinded = (ctx->prog == ctx->orig_prog ? false : true);
 
-	ctx->jit.buf = NULL;
-	ctx->jit.len = 0;
-	ctx->jit.index = 0;
-	ctx->bpf_header = NULL;
-	ctx->bpf2insn = NULL;
-	ctx->bpf2insn_valid = false;
-	ctx->jit_data = NULL;
-	ctx->arc_regs_clobbered = 0;
-	ctx->save_blink = false;
-	ctx->frame_size = 0;
-	ctx->epilogue_offset = 0;
-	ctx->need_extra_pass = false;
-	ctx->is_extra_pass = ctx->prog->jited;
-	ctx->user_bpf_prog = ctx->prog->is_func;
-	ctx->success = false;
-
 	/* If the verifier doesn't zero-extend, then we have to do it. */
 	ctx->do_zext = !ctx->prog->aux->verifier_zext;
 
+	ctx->is_extra_pass = ctx->prog->jited;
+	ctx->user_bpf_prog = ctx->prog->is_func;
+
 	return 0;
 }
From: Shahab Vahedi
To: Björn Töpel
Cc: Shahab Vahedi, Shahab Vahedi, Vineet Gupta, linux-snps-arc@lists.infradead.org
Subject: [PATCH 4/7] v2: MAINTAINERS: Add "BPF JIT for ARC" entry
Date: Tue, 30 Apr 2024 17:09:34 +0200
Message-Id: <20240430150937.39793-5-list+bpf@vahedi.org>
In-Reply-To: <20240430150937.39793-1-list+bpf@vahedi.org>
References: <20240430150937.39793-1-list+bpf@vahedi.org>

From: Shahab Vahedi

---
 MAINTAINERS | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 943921d642ad..b6a946d24f00 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -3712,6 +3712,12 @@ S:	Maintained
 F:	Documentation/devicetree/bindings/iio/imu/bosch,bmi323.yaml
 F:	drivers/iio/imu/bmi323/
 
+BPF JIT for ARC
+M:	Shahab Vahedi
+L:	bpf@vger.kernel.org
+S:	Maintained
+F:	arch/arc/net/
+
 BPF JIT for ARM
 M:	Russell King
 M:	Puranjay Mohan
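For readers unfamiliar with the MAINTAINERS format, the single-letter tags in the new entry have fixed meanings, documented at the top of the MAINTAINERS file itself. An annotated copy of the entry follows; the parenthesised notes are illustrative only (the real file has no comment syntax, and the maintainer's e-mail address is elided in this archive):

```
BPF JIT for ARC
M:	Shahab Vahedi          (M: maintainer -- name and e-mail address)
L:	bpf@vger.kernel.org    (L: mailing list where patches should be sent)
S:	Maintained             (S: status -- someone actively looks after it)
F:	arch/arc/net/          (F: file pattern the entry covers)
```

scripts/get_maintainer.pl consumes these entries to decide who gets CC'd on a patch touching `arch/arc/net/`.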
From: Shahab Vahedi
To: Björn Töpel
Cc: Shahab Vahedi, Shahab Vahedi, Vineet Gupta, linux-snps-arc@lists.infradead.org
Subject: [PATCH 5/7] v2: Fix typos
Date: Tue, 30 Apr 2024 17:09:35 +0200
Message-Id: <20240430150937.39793-6-list+bpf@vahedi.org>
In-Reply-To: <20240430150937.39793-1-list+bpf@vahedi.org>
References: <20240430150937.39793-1-list+bpf@vahedi.org>

From: Shahab Vahedi

---
 arch/arc/net/bpf_jit_core.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arc/net/bpf_jit_core.c b/arch/arc/net/bpf_jit_core.c
index 9c0fdd514967..6692272fa1ac 100644
--- a/arch/arc/net/bpf_jit_core.c
+++ b/arch/arc/net/bpf_jit_core.c
@@ -9,7 +9,7 @@
 #include "bpf_jit.h"
 
 /*
- * Check for the return value. A pattern used oftenly in this file.
+ * Check for the return value. A pattern used often in this file.
  * There must be a "ret" variable of type "int" in the scope.
  */
 #define CHECK_RET(cmd) \
@@ -1344,7 +1344,7 @@ static struct bpf_prog *do_normal_pass(struct bpf_prog *prog)
 		return prog;
 	}
 
-	/* Get the lenghts and allocate buffer. */
+	/* Get the lengths and allocate buffer. */
 	if (jit_prepare(&ctx)) {
 		jit_ctx_cleanup(&ctx);
 		return prog;
From: Shahab Vahedi
To: Björn Töpel
Cc: Shahab Vahedi, Shahab Vahedi, Vineet Gupta, linux-snps-arc@lists.infradead.org
Subject: [PATCH 6/7] v2: Fix most of the "Checks" from "checkpatch.pl"
Date: Tue, 30 Apr 2024 17:09:36 +0200
Message-Id: <20240430150937.39793-7-list+bpf@vahedi.org>
In-Reply-To: <20240430150937.39793-1-list+bpf@vahedi.org>
References: <20240430150937.39793-1-list+bpf@vahedi.org>

From: Shahab Vahedi

If they're left untouched, then it was decided like that.

The command that was used for checkpatch.pl:

  $ checkpatch.pl ... --strict --no-signoff \
      --ignore AVOID_BUG,SPLIT_STRING,COMMIT_MESSAGE \
      --git ..

---
 arch/arc/net/bpf_jit.h       | 121 ++++++++++++++++++-----------------
 arch/arc/net/bpf_jit_arcv2.c |  84 ++++++++++++------------
 arch/arc/net/bpf_jit_core.c  |  67 ++++++++++---------
 3 files changed, 135 insertions(+), 137 deletions(-)

diff --git a/arch/arc/net/bpf_jit.h b/arch/arc/net/bpf_jit.h
index 9fc70d97415b..ec44873c42d1 100644
--- a/arch/arc/net/bpf_jit.h
+++ b/arch/arc/net/bpf_jit.h
@@ -39,75 +39,75 @@
 /************** Functions that the back-end must provide **************/
 /* Extension for 32-bit operations.
  */
-extern inline u8 zext(u8 *buf, u8 rd);
+inline u8 zext(u8 *buf, u8 rd);
 
 /***** Moves *****/
-extern u8 mov_r32(u8 *buf, u8 rd, u8 rs, u8 sign_ext);
-extern u8 mov_r32_i32(u8 *buf, u8 reg, s32 imm);
-extern u8 mov_r64(u8 *buf, u8 rd, u8 rs, u8 sign_ext);
-extern u8 mov_r64_i32(u8 *buf, u8 reg, s32 imm);
-extern u8 mov_r64_i64(u8 *buf, u8 reg, u32 lo, u32 hi);
+u8 mov_r32(u8 *buf, u8 rd, u8 rs, u8 sign_ext);
+u8 mov_r32_i32(u8 *buf, u8 reg, s32 imm);
+u8 mov_r64(u8 *buf, u8 rd, u8 rs, u8 sign_ext);
+u8 mov_r64_i32(u8 *buf, u8 reg, s32 imm);
+u8 mov_r64_i64(u8 *buf, u8 reg, u32 lo, u32 hi);
 
 /***** Loads and stores *****/
-extern u8 load_r(u8 *buf, u8 rd, u8 rs, s16 off, u8 size, bool sign_ext);
-extern u8 store_r(u8 *buf, u8 rd, u8 rs, s16 off, u8 size);
-extern u8 store_i(u8 *buf, s32 imm, u8 rd, s16 off, u8 size);
+u8 load_r(u8 *buf, u8 rd, u8 rs, s16 off, u8 size, bool sign_ext);
+u8 store_r(u8 *buf, u8 rd, u8 rs, s16 off, u8 size);
+u8 store_i(u8 *buf, s32 imm, u8 rd, s16 off, u8 size);
 
 /***** Addition *****/
-extern u8 add_r32(u8 *buf, u8 rd, u8 rs);
-extern u8 add_r32_i32(u8 *buf, u8 rd, s32 imm);
-extern u8 add_r64(u8 *buf, u8 rd, u8 rs);
-extern u8 add_r64_i32(u8 *buf, u8 rd, s32 imm);
+u8 add_r32(u8 *buf, u8 rd, u8 rs);
+u8 add_r32_i32(u8 *buf, u8 rd, s32 imm);
+u8 add_r64(u8 *buf, u8 rd, u8 rs);
+u8 add_r64_i32(u8 *buf, u8 rd, s32 imm);
 
 /***** Subtraction *****/
-extern u8 sub_r32(u8 *buf, u8 rd, u8 rs);
-extern u8 sub_r32_i32(u8 *buf, u8 rd, s32 imm);
-extern u8 sub_r64(u8 *buf, u8 rd, u8 rs);
-extern u8 sub_r64_i32(u8 *buf, u8 rd, s32 imm);
+u8 sub_r32(u8 *buf, u8 rd, u8 rs);
+u8 sub_r32_i32(u8 *buf, u8 rd, s32 imm);
+u8 sub_r64(u8 *buf, u8 rd, u8 rs);
+u8 sub_r64_i32(u8 *buf, u8 rd, s32 imm);
 
 /***** Multiplication *****/
-extern u8 mul_r32(u8 *buf, u8 rd, u8 rs);
-extern u8 mul_r32_i32(u8 *buf, u8 rd, s32 imm);
-extern u8 mul_r64(u8 *buf, u8 rd, u8 rs);
-extern u8 mul_r64_i32(u8 *buf, u8 rd, s32 imm);
+u8 mul_r32(u8 *buf, u8 rd, u8 rs);
+u8 mul_r32_i32(u8 *buf, u8 rd, s32 imm);
+u8 mul_r64(u8 *buf, u8 rd, u8 rs);
+u8 mul_r64_i32(u8 *buf, u8 rd, s32 imm);
 
 /***** Division *****/
-extern u8 div_r32(u8 *buf, u8 rd, u8 rs, bool sign_ext);
-extern u8 div_r32_i32(u8 *buf, u8 rd, s32 imm, bool sign_ext);
+u8 div_r32(u8 *buf, u8 rd, u8 rs, bool sign_ext);
+u8 div_r32_i32(u8 *buf, u8 rd, s32 imm, bool sign_ext);
 
 /***** Remainder *****/
-extern u8 mod_r32(u8 *buf, u8 rd, u8 rs, bool sign_ext);
-extern u8 mod_r32_i32(u8 *buf, u8 rd, s32 imm, bool sign_ext);
+u8 mod_r32(u8 *buf, u8 rd, u8 rs, bool sign_ext);
+u8 mod_r32_i32(u8 *buf, u8 rd, s32 imm, bool sign_ext);
 
 /***** Bitwise AND *****/
-extern u8 and_r32(u8 *buf, u8 rd, u8 rs);
-extern u8 and_r32_i32(u8 *buf, u8 rd, s32 imm);
-extern u8 and_r64(u8 *buf, u8 rd, u8 rs);
-extern u8 and_r64_i32(u8 *buf, u8 rd, s32 imm);
+u8 and_r32(u8 *buf, u8 rd, u8 rs);
+u8 and_r32_i32(u8 *buf, u8 rd, s32 imm);
+u8 and_r64(u8 *buf, u8 rd, u8 rs);
+u8 and_r64_i32(u8 *buf, u8 rd, s32 imm);
 
 /***** Bitwise OR *****/
-extern u8 or_r32(u8 *buf, u8 rd, u8 rs);
-extern u8 or_r32_i32(u8 *buf, u8 rd, s32 imm);
-extern u8 or_r64(u8 *buf, u8 rd, u8 rs);
-extern u8 or_r64_i32(u8 *buf, u8 rd, s32 imm);
+u8 or_r32(u8 *buf, u8 rd, u8 rs);
+u8 or_r32_i32(u8 *buf, u8 rd, s32 imm);
+u8 or_r64(u8 *buf, u8 rd, u8 rs);
+u8 or_r64_i32(u8 *buf, u8 rd, s32 imm);
 
 /***** Bitwise XOR *****/
-extern u8 xor_r32(u8 *buf, u8 rd, u8 rs);
-extern u8 xor_r32_i32(u8 *buf, u8 rd, s32 imm);
-extern u8 xor_r64(u8 *buf, u8 rd, u8 rs);
-extern u8 xor_r64_i32(u8 *buf, u8 rd, s32 imm);
+u8 xor_r32(u8 *buf, u8 rd, u8 rs);
+u8 xor_r32_i32(u8 *buf, u8 rd, s32 imm);
+u8 xor_r64(u8 *buf, u8 rd, u8 rs);
+u8 xor_r64_i32(u8 *buf, u8 rd, s32 imm);
 
 /***** Bitwise Negate *****/
-extern u8 neg_r32(u8 *buf, u8 r);
-extern u8 neg_r64(u8 *buf, u8 r);
+u8 neg_r32(u8 *buf, u8 r);
+u8 neg_r64(u8 *buf, u8 r);
 
 /***** Bitwise left shift *****/
-extern u8 lsh_r32(u8 *buf, u8 rd, u8 rs);
-extern u8 lsh_r32_i32(u8 *buf, u8 rd, u8 imm);
-extern u8 lsh_r64(u8 *buf, u8 rd, u8 rs);
-extern u8 lsh_r64_i32(u8 *buf, u8 rd, s32 imm);
+u8 lsh_r32(u8 *buf, u8 rd, u8 rs);
+u8 lsh_r32_i32(u8 *buf, u8 rd, u8 imm);
+u8 lsh_r64(u8 *buf, u8 rd, u8 rs);
+u8 lsh_r64_i32(u8 *buf, u8 rd, s32 imm);
 
 /***** Bitwise right shift (logical) *****/
-extern u8 rsh_r32(u8 *buf, u8 rd, u8 rs);
-extern u8 rsh_r32_i32(u8 *buf, u8 rd, u8 imm);
-extern u8 rsh_r64(u8 *buf, u8 rd, u8 rs);
-extern u8 rsh_r64_i32(u8 *buf, u8 rd, s32 imm);
+u8 rsh_r32(u8 *buf, u8 rd, u8 rs);
+u8 rsh_r32_i32(u8 *buf, u8 rd, u8 imm);
+u8 rsh_r64(u8 *buf, u8 rd, u8 rs);
+u8 rsh_r64_i32(u8 *buf, u8 rd, s32 imm);
 
 /***** Bitwise right shift (arithmetic) *****/
-extern u8 arsh_r32(u8 *buf, u8 rd, u8 rs);
-extern u8 arsh_r32_i32(u8 *buf, u8 rd, u8 imm);
-extern u8 arsh_r64(u8 *buf, u8 rd, u8 rs);
-extern u8 arsh_r64_i32(u8 *buf, u8 rd, s32 imm);
+u8 arsh_r32(u8 *buf, u8 rd, u8 rs);
+u8 arsh_r32_i32(u8 *buf, u8 rd, u8 imm);
+u8 arsh_r64(u8 *buf, u8 rd, u8 rs);
+u8 arsh_r64_i32(u8 *buf, u8 rd, s32 imm);
 
 /***** Frame related *****/
-extern u32 mask_for_used_regs(u8 bpf_reg, bool is_call);
-extern u8 arc_prologue(u8 *buf, u32 usage, u16 frame_size);
-extern u8 arc_epilogue(u8 *buf, u32 usage, u16 frame_size);
+u32 mask_for_used_regs(u8 bpf_reg, bool is_call);
+u8 arc_prologue(u8 *buf, u32 usage, u16 frame_size);
+u8 arc_epilogue(u8 *buf, u32 usage, u16 frame_size);
 
 /***** Jumps *****/
 /*
 * Different sorts of conditions (ARC enum as opposed to BPF_*).
@@ -130,6 +130,7 @@ enum ARC_CC {
 	ARC_CC_SET,	/* test */
 	ARC_CC_LAST
 };
+
 /*
 * A few notes:
 *
@@ -144,13 +145,13 @@ enum ARC_CC {
 * things simpler (offsets are in the range of u32 which is more than
 * enough).
 */
-extern bool check_jmp_32(u32 curr_off, u32 targ_off, u8 cond);
-extern bool check_jmp_64(u32 curr_off, u32 targ_off, u8 cond);
-extern u8 gen_jmp_32(u8 *buf, u8 rd, u8 rs, u8 cond, u32 c_off, u32 t_off);
-extern u8 gen_jmp_64(u8 *buf, u8 rd, u8 rs, u8 cond, u32 c_off, u32 t_off);
+bool check_jmp_32(u32 curr_off, u32 targ_off, u8 cond);
+bool check_jmp_64(u32 curr_off, u32 targ_off, u8 cond);
+u8 gen_jmp_32(u8 *buf, u8 rd, u8 rs, u8 cond, u32 c_off, u32 t_off);
+u8 gen_jmp_64(u8 *buf, u8 rd, u8 rs, u8 cond, u32 c_off, u32 t_off);
 
 /***** Miscellaneous *****/
-extern u8 gen_func_call(u8 *buf, ARC_ADDR func_addr, bool external_func);
-extern u8 arc_to_bpf_return(u8 *buf);
+u8 gen_func_call(u8 *buf, ARC_ADDR func_addr, bool external_func);
+u8 arc_to_bpf_return(u8 *buf);
 
 /*
 * - Perform byte swaps on "rd" based on the "size".
 * - If "force" is set, do it unconditionally. Otherwise, consider the

diff --git a/arch/arc/net/bpf_jit_arcv2.c b/arch/arc/net/bpf_jit_arcv2.c
index 8b7ae2f11f38..31bfb6e9ce00 100644
--- a/arch/arc/net/bpf_jit_arcv2.c
+++ b/arch/arc/net/bpf_jit_arcv2.c
@@ -5,7 +5,7 @@
 * Copyright (c) 2024 Synopsys Inc.
 * Author: Shahab Vahedi
 */
-#include
+#include
 #include "bpf_jit.h"
 
 /* ARC core registers. */
@@ -91,7 +91,6 @@ const u8 bpf2arc[][2] = {
 #define REG_LO(r)	(bpf2arc[(r)][0])
 #define REG_HI(r)	(bpf2arc[(r)][1])
 
-
 /*
 * To comply with ARCv2 ABI, BPF's arg5 must be put on stack. After which,
 * the stack needs to be restored by ARG5_SIZE.
@@ -201,7 +200,7 @@ enum {
 * c: cccccc	source
 */
 #define OPC_MOV_CC	0x20ca0000
-#define MOV_CC_I	(1 << 5)
+#define MOV_CC_I	BIT(5)
 #define OPC_MOVU_CC	(OPC_MOV_CC | MOV_CC_I)
 
 /*
@@ -289,7 +288,7 @@ enum {
 #define OPC_ADD	0x20000000
 /* Addition with updating the pertinent flags in "status32" register.
 */
 #define OPC_ADDF	(OPC_ADD | FLAG(1))
-#define ADDI		(1 << 22)
+#define ADDI		BIT(22)
 #define ADDI_U6(x)	OP_C(x)
 #define OPC_ADDI	(OPC_ADD | ADDI)
 #define OPC_ADDIF	(OPC_ADDI | FLAG(1))
@@ -307,7 +306,7 @@ enum {
 * c: cccccc	the 2nd input operand
 */
 #define OPC_ADC	0x20010000
-#define ADCI		(1 << 22)
+#define ADCI		BIT(22)
 #define ADCI_U6(x)	OP_C(x)
 #define OPC_ADCI	(OPC_ADC | ADCI)
@@ -326,7 +325,7 @@ enum {
 #define OPC_SUB	0x20020000
 /* Subtraction with updating the pertinent flags in "status32" register. */
 #define OPC_SUBF	(OPC_SUB | FLAG(1))
-#define SUBI		(1 << 22)
+#define SUBI		BIT(22)
 #define SUBI_U6(x)	OP_C(x)
 #define OPC_SUBI	(OPC_SUB | SUBI)
 #define OPC_SUB_I	(OPC_SUB | OP_IMM)
@@ -526,7 +525,7 @@ enum {
 * c: cccccc	amount to be shifted
 */
 #define OPC_ASL	0x28000000
-#define ASL_I		(1 << 22)
+#define ASL_I		BIT(22)
 #define ASLI_U6(x)	OP_C((x) & 31)
 #define OPC_ASLI	(OPC_ASL | ASL_I)
@@ -629,13 +628,13 @@ enum {
 
 static inline void emit_2_bytes(u8 *buf, u16 bytes)
 {
-	*((u16 *) buf) = bytes;
+	*((u16 *)buf) = bytes;
 }
 
 static inline void emit_4_bytes(u8 *buf, u32 bytes)
 {
-	emit_2_bytes(buf+0, bytes >> 16);
-	emit_2_bytes(buf+2, bytes & 0xffff);
+	emit_2_bytes(buf, bytes >> 16);
+	emit_2_bytes(buf + 2, bytes & 0xffff);
 }
 
 static inline u8 bpf_to_arc_size(u8 size)
@@ -686,7 +685,7 @@ static u8 arc_mov_i(u8 *buf, u8 rd, s32 imm)
 
 	if (buf) {
 		emit_4_bytes(buf, insn);
-		emit_4_bytes(buf+INSN_len_normal, imm);
+		emit_4_bytes(buf + INSN_len_normal, imm);
 	}
 	return INSN_len_normal + INSN_len_imm;
 }
@@ -698,7 +697,7 @@ static u8 arc_mov_i_fixed(u8 *buf, u8 rd, s32 imm)
 
 	if (buf) {
 		emit_4_bytes(buf, insn);
-		emit_4_bytes(buf+INSN_len_normal, imm);
+		emit_4_bytes(buf + INSN_len_normal, imm);
 	}
 	return INSN_len_normal + INSN_len_imm;
 }
@@ -843,7 +842,7 @@ static u8 arc_add_i(u8 *buf, u8 ra, u8 rb, s32 imm)
 
 	if (buf) {
 		emit_4_bytes(buf, insn);
-		emit_4_bytes(buf+INSN_len_normal, imm);
+		emit_4_bytes(buf + INSN_len_normal, imm);
 	}
 	return INSN_len_normal + INSN_len_imm;
 }
@@ -905,7 +904,7 @@ static u8 arc_sub_i(u8 *buf, u8 ra, s32 imm)
 
 	if (buf) {
 		emit_4_bytes(buf, insn);
-		emit_4_bytes(buf+INSN_len_normal, imm);
+		emit_4_bytes(buf + INSN_len_normal, imm);
 	}
 	return INSN_len_normal + INSN_len_imm;
 }
@@ -974,7 +973,7 @@ static u8 arc_mpy_i(u8 *buf, u8 ra, u8 rb, s32 imm)
 
 	if (buf) {
 		emit_4_bytes(buf, insn);
-		emit_4_bytes(buf+INSN_len_normal, imm);
+		emit_4_bytes(buf + INSN_len_normal, imm);
 	}
 	return INSN_len_normal + INSN_len_imm;
 }
@@ -996,7 +995,7 @@ static u8 arc_mpydu_i(u8 *buf, u8 ra, s32 imm)
 
 	if (buf) {
 		emit_4_bytes(buf, insn);
-		emit_4_bytes(buf+INSN_len_normal, imm);
+		emit_4_bytes(buf + INSN_len_normal, imm);
 	}
 	return INSN_len_normal + INSN_len_imm;
 }
@@ -1018,7 +1017,7 @@ static u8 arc_divu_i(u8 *buf, u8 rd, s32 imm)
 
 	if (buf) {
 		emit_4_bytes(buf, insn);
-		emit_4_bytes(buf+INSN_len_normal, imm);
+		emit_4_bytes(buf + INSN_len_normal, imm);
 	}
 	return INSN_len_normal + INSN_len_imm;
 }
@@ -1040,7 +1039,7 @@ static u8 arc_divs_i(u8 *buf, u8 rd, s32 imm)
 
 	if (buf) {
 		emit_4_bytes(buf, insn);
-		emit_4_bytes(buf+INSN_len_normal, imm);
+		emit_4_bytes(buf + INSN_len_normal, imm);
 	}
 	return INSN_len_normal + INSN_len_imm;
 }
@@ -1062,7 +1061,7 @@ static u8 arc_remu_i(u8 *buf, u8 rd, s32 imm)
 
 	if (buf) {
 		emit_4_bytes(buf, insn);
-		emit_4_bytes(buf+INSN_len_normal, imm);
+		emit_4_bytes(buf + INSN_len_normal, imm);
 	}
 	return INSN_len_normal + INSN_len_imm;
 }
@@ -1084,7 +1083,7 @@ static u8 arc_rems_i(u8 *buf, u8 rd, s32 imm)
 
 	if (buf) {
 		emit_4_bytes(buf, insn);
-		emit_4_bytes(buf+INSN_len_normal, imm);
+		emit_4_bytes(buf + INSN_len_normal, imm);
 	}
 	return INSN_len_normal + INSN_len_imm;
 }
@@ -1106,7 +1105,7 @@ static u8 arc_and_i(u8 *buf, u8 rd, s32 imm)
 
 	if (buf) {
 		emit_4_bytes(buf, insn);
-		emit_4_bytes(buf+INSN_len_normal, imm);
+		emit_4_bytes(buf + INSN_len_normal, imm);
 	}
 	return INSN_len_normal + INSN_len_imm;
 }
@@ -1151,7 +1150,7 @@ static u8 arc_or_i(u8 *buf, u8 rd, s32 imm)
 
 	if (buf) {
 		emit_4_bytes(buf, insn);
-		emit_4_bytes(buf+INSN_len_normal, imm);
+
emit_4_bytes(buf + INSN_len_normal, imm); } return INSN_len_normal + INSN_len_imm; } @@ -1171,7 +1170,7 @@ static u8 arc_xor_i(u8 *buf, u8 rd, s32 imm) if (buf) { emit_4_bytes(buf, insn); - emit_4_bytes(buf+INSN_len_normal, imm); + emit_4_bytes(buf + INSN_len_normal, imm); } return INSN_len_normal + INSN_len_imm; } @@ -1449,7 +1448,7 @@ static u8 adjust_mem_access(u8 *buf, s16 *off, u8 size, if (!IN_S9_RANGE(*off) || (size == BPF_DW && !IN_S9_RANGE(*off + 4))) { len += arc_add_i(BUF(buf, len), - REG_LO(JIT_REG_TMP), REG_LO(rm), (u32) (*off)); + REG_LO(JIT_REG_TMP), REG_LO(rm), (u32)(*off)); *arc_reg_mem = REG_LO(JIT_REG_TMP); *off = 0; } @@ -1468,7 +1467,7 @@ u8 store_r(u8 *buf, u8 rs, u8 rd, s16 off, u8 size) len += arc_st_r(BUF(buf, len), REG_LO(rs), arc_reg_mem, off, ZZ_4_byte); len += arc_st_r(BUF(buf, len), REG_HI(rs), arc_reg_mem, - off+4, ZZ_4_byte); + off + 4, ZZ_4_byte); } else { u8 zz = bpf_to_arc_size(size); @@ -1504,7 +1503,7 @@ u8 store_i(u8 *buf, s32 imm, u8 rd, s16 off, u8 size) imm = (imm >= 0 ? 
0 : -1); len += arc_mov_i(BUF(buf, len), arc_rs, imm); len += arc_st_r(BUF(buf, len), arc_rs, arc_reg_mem, - off+4, ZZ_4_byte); + off + 4, ZZ_4_byte); } else { u8 zz = bpf_to_arc_size(size); @@ -1579,14 +1578,14 @@ u8 load_r(u8 *buf, u8 rd, u8 rs, s16 off, u8 size, bool sign_ext) */ if (REG_LO(rd) != arc_reg_mem) { len += arc_ld_r(BUF(buf, len), REG_LO(rd), arc_reg_mem, - off+0, ZZ_4_byte); + off, ZZ_4_byte); len += arc_ld_r(BUF(buf, len), REG_HI(rd), arc_reg_mem, - off+4, ZZ_4_byte); + off + 4, ZZ_4_byte); } else { len += arc_ld_r(BUF(buf, len), REG_HI(rd), arc_reg_mem, - off+4, ZZ_4_byte); + off + 4, ZZ_4_byte); len += arc_ld_r(BUF(buf, len), REG_LO(rd), arc_reg_mem, - off+0, ZZ_4_byte); + off, ZZ_4_byte); } } @@ -1984,7 +1983,7 @@ u8 lsh_r64_i32(u8 *buf, u8 rd, s32 imm) const u8 t0 = REG_LO(JIT_REG_TMP); const u8 B_lo = REG_LO(rd); const u8 B_hi = REG_HI(rd); - const u8 n = (u8) imm; + const u8 n = (u8)imm; u8 len = 0; if (n == 0) { @@ -2079,7 +2078,7 @@ u8 rsh_r64_i32(u8 *buf, u8 rd, s32 imm) const u8 t0 = REG_LO(JIT_REG_TMP); const u8 B_lo = REG_LO(rd); const u8 B_hi = REG_HI(rd); - const u8 n = (u8) imm; + const u8 n = (u8)imm; u8 len = 0; if (n == 0) { @@ -2177,7 +2176,7 @@ u8 arsh_r64_i32(u8 *buf, u8 rd, s32 imm) const u8 t0 = REG_LO(JIT_REG_TMP); const u8 B_lo = REG_LO(rd); const u8 B_hi = REG_HI(rd); - const u8 n = (u8) imm; + const u8 n = (u8)imm; u8 len = 0; if (n == 0) { @@ -2418,14 +2417,14 @@ u8 arc_prologue(u8 *buf, u32 usage, u16 frame_size) } /* Deal with fp last. 
*/ - if ((usage & BIT(ARC_R_FP)) || (frame_size > 0)) + if ((usage & BIT(ARC_R_FP)) || frame_size > 0) len += arc_push_r(BUF(buf, len), ARC_R_FP); if (frame_size > 0) len += frame_create(BUF(buf, len), frame_size); #ifdef ARC_BPF_JIT_DEBUG - if ((usage & BIT(ARC_R_FP)) && (frame_size == 0)) { + if ((usage & BIT(ARC_R_FP)) && frame_size == 0) { pr_err("FP is being saved while there is no frame."); BUG(); } @@ -2452,7 +2451,7 @@ u8 arc_epilogue(u8 *buf, u32 usage, u16 frame_size) u32 gp_regs = 0; #ifdef ARC_BPF_JIT_DEBUG - if ((usage & BIT(ARC_R_FP)) && (frame_size == 0)) { + if ((usage & BIT(ARC_R_FP)) && frame_size == 0) { pr_err("FP is being saved while there is no frame."); BUG(); } @@ -2462,7 +2461,7 @@ u8 arc_epilogue(u8 *buf, u32 usage, u16 frame_size) len += frame_restore(BUF(buf, len)); /* Deal with fp first. */ - if ((usage & BIT(ARC_R_FP)) || (frame_size > 0)) + if ((usage & BIT(ARC_R_FP)) || frame_size > 0) len += arc_pop_r(BUF(buf, len), ARC_R_FP); gp_regs = usage & ~(BIT(ARC_R_BLINK) | BIT(ARC_R_FP)); @@ -2533,12 +2532,12 @@ const struct { struct { u8 cond[JCC64_NR_OF_JMPS]; - } jmp[ARC_CC_SLE+1]; + } jmp[ARC_CC_SLE + 1]; } arcv2_64_jccs = { .jit_off = { - INSN_len_normal*1, - INSN_len_normal*2, - INSN_len_normal*4 + INSN_len_normal * 1, + INSN_len_normal * 2, + INSN_len_normal * 4 }, /* * cmp rd_hi, rs_hi @@ -2639,7 +2638,7 @@ const struct { */ static inline s32 get_displacement(u32 curr_off, u32 targ_off) { - return (s32) (targ_off - (curr_off & ~3L)); + return (s32)(targ_off - (curr_off & ~3L)); } /* @@ -2704,7 +2703,6 @@ static u8 gen_jset_64(u8 *buf, u8 rd, u8 rs, u32 curr_off, u32 targ_off) return len; } - /* * Verify if all the jumps for a JITed jcc64 operation are valid, * by consulting the data stored at "arcv2_64_jccs". 
diff --git a/arch/arc/net/bpf_jit_core.c b/arch/arc/net/bpf_jit_core.c index 6692272fa1ac..00c99b339b4a 100644 --- a/arch/arc/net/bpf_jit_core.c +++ b/arch/arc/net/bpf_jit_core.c @@ -5,7 +5,7 @@ * Copyright (c) 2024 Synopsys Inc. * Author: Shahab Vahedi */ -#include +#include #include "bpf_jit.h" /* @@ -30,18 +30,18 @@ static void dump_bytes(const u8 *buf, u32 len, const char *header) for (i = 0, j = 0; i < len; i++) { /* Last input byte? */ - if (i == len-1) { - j += scnprintf(line+j, 64-j, "0x%02x", buf[i]); + if (i == len - 1) { + j += scnprintf(line + j, 64 - j, "0x%02x", buf[i]); pr_info("%s\n", line); break; } /* End of line? */ else if (i % 8 == 7) { - j += scnprintf(line+j, 64-j, "0x%02x", buf[i]); + j += scnprintf(line + j, 64 - j, "0x%02x", buf[i]); pr_info("%s\n", line); j = 0; } else { - j += scnprintf(line+j, 64-j, "0x%02x, ", buf[i]); + j += scnprintf(line + j, 64 - j, "0x%02x, ", buf[i]); } } } @@ -126,7 +126,7 @@ static void vm_dump(const struct bpf_prog *prog) { #ifdef ARC_BPF_JIT_DEBUG if (bpf_jit_enable > 1) - dump_bytes((u8 *) prog->insns, 8*prog->len, " VM "); + dump_bytes((u8 *)prog->insns, 8 * prog->len, " VM "); #endif } @@ -222,8 +222,8 @@ static void jit_ctx_cleanup(struct jit_context *ctx) bpf_jit_prog_release_other(ctx->orig_prog, ctx->prog); } - maybe_free(ctx, (void **) &ctx->bpf2insn); - maybe_free(ctx, (void **) &ctx->jit_data); + maybe_free(ctx, (void **)&ctx->bpf2insn); + maybe_free(ctx, (void **)&ctx->jit_data); if (!ctx->bpf2insn) ctx->bpf2insn_valid = false; @@ -267,8 +267,8 @@ static void analyze_reg_usage(struct jit_context *ctx) /* Verify that no instruction will be emitted when there is no buffer. 
*/ static inline int jit_buffer_check(const struct jit_context *ctx) { - if (ctx->emit == true) { - if (ctx->jit.buf == NULL) { + if (ctx->emit) { + if (!ctx->jit.buf) { pr_err("bpf-jit: inconsistence state; no " "buffer to emit instructions.\n"); return -EINVAL; @@ -333,7 +333,6 @@ static inline s32 get_index_for_insn(const struct jit_context *ctx, return (insn - ctx->prog->insnsi); } - /* * In most of the cases, the "offset" is read from "insn->off". However, * if it is an unconditional BPF_JMP32, then it comes from "insn->imm". @@ -608,7 +607,7 @@ static int handle_jumps(const struct jit_context *ctx, * (curr_off) will have increased to a point where the necessary * instructions can be inserted by "gen_jmp_{32,64}()". */ - if (has_imm(insn) && (cond != ARC_CC_AL)) { + if (has_imm(insn) && cond != ARC_CC_AL) { if (j32) { *len += mov_r32_i32(BUF(buf, *len), JIT_REG_TMP, insn->imm); @@ -685,7 +684,7 @@ static int handle_call(struct jit_context *ctx, if (!fixed && !addr) set_need_for_extra_pass(ctx); - *len = gen_func_call(buf, (ARC_ADDR) addr, in_kernel_func); + *len = gen_func_call(buf, (ARC_ADDR)addr, in_kernel_func); if (insn->src_reg != BPF_PSEUDO_CALL) { /* Assigning ABI's return reg to JIT's return reg. 
*/ @@ -714,7 +713,7 @@ static int handle_ld_imm64(struct jit_context *ctx, return -EINVAL; } - *len = mov_r64_i64(buf, insn->dst_reg, insn->imm, (insn+1)->imm); + *len = mov_r64_i64(buf, insn->dst_reg, insn->imm, (insn + 1)->imm); if (bpf_pseudo_func(insn)) set_need_for_extra_pass(ctx); @@ -841,7 +840,7 @@ static int handle_insn(struct jit_context *ctx, u32 idx) break; /* dst = src (32-bit) */ case BPF_ALU | BPF_MOV | BPF_X: - len = mov_r32(buf, dst, src, (u8) off); + len = mov_r32(buf, dst, src, (u8)off); break; /* dst = imm32 (32-bit) */ case BPF_ALU | BPF_MOV | BPF_K: @@ -934,7 +933,7 @@ static int handle_insn(struct jit_context *ctx, u32 idx) break; /* dst = src (64-bit) */ case BPF_ALU64 | BPF_MOV | BPF_X: - len = mov_r64(buf, dst, src, (u8) off); + len = mov_r64(buf, dst, src, (u8)off); break; /* dst = imm32 (sign extend to 64-bit) */ case BPF_ALU64 | BPF_MOV | BPF_K: @@ -1074,7 +1073,7 @@ static int handle_body(struct jit_context *ctx) CHECK_RET(handle_insn(ctx, i)); if (ret > 0) { /* "ret" is 1 if two (64-bit) chunks were consumed. 
*/ - ctx->bpf2insn[i+1] = ctx->bpf2insn[i]; + ctx->bpf2insn[i + 1] = ctx->bpf2insn[i]; i++; } } @@ -1103,7 +1102,7 @@ static void fill_ill_insn(void *area, unsigned int size) const u16 unimp_s = 0x79e0; if (size & 1) { - *((u8 *) area + (size - 1)) = 0xff; + *((u8 *)area + (size - 1)) = 0xff; size -= 1; } @@ -1141,8 +1140,7 @@ static int jit_prepare_final_mem_alloc(struct jit_context *ctx) } if (ctx->need_extra_pass) { - ctx->jit_data = kzalloc(sizeof(struct arc_jit_data), - GFP_KERNEL); + ctx->jit_data = kzalloc(sizeof(*ctx->jit_data), GFP_KERNEL); if (!ctx->jit_data) return -ENOMEM; } @@ -1224,23 +1222,23 @@ static void jit_finalize(struct jit_context *ctx) { struct bpf_prog *prog = ctx->prog; - ctx->success = true; - prog->bpf_func = (void *) ctx->jit.buf; + ctx->success = true; + prog->bpf_func = (void *)ctx->jit.buf; prog->jited_len = ctx->jit.len; - prog->jited = 1; + prog->jited = 1; /* We're going to need this information for the "do_extra_pass()". */ if (ctx->need_extra_pass) { ctx->jit_data->bpf_header = ctx->bpf_header; - ctx->jit_data->bpf2insn = ctx->bpf2insn; - prog->aux->jit_data = (void *) ctx->jit_data; + ctx->jit_data->bpf2insn = ctx->bpf2insn; + prog->aux->jit_data = (void *)ctx->jit_data; } else { /* * If things seem finalised, then mark the JITed memory * as R-X and flush it. */ bpf_jit_binary_lock_ro(ctx->bpf_header); - flush_icache_range((unsigned long) ctx->bpf_header, + flush_icache_range((unsigned long)ctx->bpf_header, (unsigned long) BUF(ctx->jit.buf, ctx->jit.len)); prog->aux->jit_data = NULL; @@ -1258,30 +1256,31 @@ static void jit_finalize(struct jit_context *ctx) */ static inline int check_jit_context(const struct bpf_prog *prog) { - if (prog->aux->jit_data == NULL) { + if (!prog->aux->jit_data) { pr_notice("bpf-jit: no jit data for the extra pass.\n"); return 1; - } else + } else { return 0; + } } /* Reuse the previous pass's data. 
*/ static int jit_resume_context(struct jit_context *ctx) { struct arc_jit_data *jdata = - (struct arc_jit_data *) ctx->prog->aux->jit_data; + (struct arc_jit_data *)ctx->prog->aux->jit_data; if (!jdata) { pr_err("bpf-jit: no jit data for the extra pass.\n"); return -EINVAL; } - ctx->jit.buf = (u8 *) ctx->prog->bpf_func; - ctx->jit.len = ctx->prog->jited_len; - ctx->bpf_header = jdata->bpf_header; - ctx->bpf2insn = (u32 *) jdata->bpf2insn; + ctx->jit.buf = (u8 *)ctx->prog->bpf_func; + ctx->jit.len = ctx->prog->jited_len; + ctx->bpf_header = jdata->bpf_header; + ctx->bpf2insn = (u32 *)jdata->bpf2insn; ctx->bpf2insn_valid = ctx->bpf2insn ? true : false; - ctx->jit_data = jdata; + ctx->jit_data = jdata; return 0; }

From patchwork Tue Apr 30 15:09:37 2024
From: Shahab Vahedi To: Björn Töpel Cc: Shahab Vahedi , Shahab Vahedi , Vineet Gupta , linux-snps-arc@lists.infradead.org Subject: [PATCH 7/7] v2: Check "bpf_jit_binary_lock_ro()" return value Date: Tue, 30 Apr 2024 17:09:37 +0200 Message-Id: <20240430150937.39793-8-list+bpf@vahedi.org> In-Reply-To: <20240430150937.39793-1-list+bpf@vahedi.org> References: <20240430150937.39793-1-list+bpf@vahedi.org> MIME-Version: 1.0
From: Shahab Vahedi

...after the rebase. --- arch/arc/net/bpf_jit_core.c | 29 ++++++++++++++++++++--------- 1 file changed, 20 insertions(+), 9 deletions(-) diff --git a/arch/arc/net/bpf_jit_core.c b/arch/arc/net/bpf_jit_core.c index 00c99b339b4a..6f6b4ffccf2c 100644 --- a/arch/arc/net/bpf_jit_core.c +++ b/arch/arc/net/bpf_jit_core.c @@ -1218,15 +1218,10 @@ static int jit_compile(struct jit_context *ctx) * * prog->jited=1, prog->jited_len=..., prog->bpf_func=...
*/ -static void jit_finalize(struct jit_context *ctx) +static int jit_finalize(struct jit_context *ctx) { struct bpf_prog *prog = ctx->prog; - ctx->success = true; - prog->bpf_func = (void *)ctx->jit.buf; - prog->jited_len = ctx->jit.len; - prog->jited = 1; - /* We're going to need this information for the "do_extra_pass()". */ if (ctx->need_extra_pass) { ctx->jit_data->bpf_header = ctx->bpf_header; @@ -1237,7 +1232,10 @@ static void jit_finalize(struct jit_context *ctx) * If things seem finalised, then mark the JITed memory * as R-X and flush it. */ - bpf_jit_binary_lock_ro(ctx->bpf_header); + if (bpf_jit_binary_lock_ro(ctx->bpf_header)) { + pr_err("bpf-jit: Could not lock the JIT memory.\n"); + return -EFAULT; + } flush_icache_range((unsigned long)ctx->bpf_header, (unsigned long) BUF(ctx->jit.buf, ctx->jit.len)); @@ -1245,8 +1243,15 @@ static void jit_finalize(struct jit_context *ctx) bpf_prog_fill_jited_linfo(prog, ctx->bpf2insn); } + ctx->success = true; + prog->bpf_func = (void *)ctx->jit.buf; + prog->jited_len = ctx->jit.len; + prog->jited = 1; + jit_ctx_cleanup(ctx); jit_dump(ctx); + + return 0; } /* @@ -1354,7 +1359,10 @@ static struct bpf_prog *do_normal_pass(struct bpf_prog *prog) return prog; } - jit_finalize(&ctx); + if (jit_finalize(&ctx)) { + jit_ctx_cleanup(&ctx); + return prog; + } return ctx.prog; } @@ -1389,7 +1397,10 @@ static struct bpf_prog *do_extra_pass(struct bpf_prog *prog) return prog; } - jit_finalize(&ctx); + if (jit_finalize(&ctx)) { + jit_ctx_cleanup(&ctx); + return prog; + } return ctx.prog; }