From patchwork Mon Dec 25 06:25:50 2023
X-Patchwork-Submitter: joshua
X-Patchwork-Id: 1880069
From: "Jun Sha (Joshua)" <cooper.joshua@linux.alibaba.com>
To: gcc-patches@gcc.gnu.org
Cc: jim.wilson.gcc@gmail.com, palmer@dabbelt.com, andrew@sifive.com,
 philipp.tomsich@vrull.eu, jeffreyalaw@gmail.com, christoph.muellner@vrull.eu,
 juzhe.zhong@rivai.ai, "Jun Sha (Joshua)", Jin Ma, Xianmiao Qu
Subject: [PATCH v4 4/6] RISC-V: Adds the prefix "th." for the instructions of
 XTheadVector.
Date: Mon, 25 Dec 2023 14:25:50 +0800
Message-Id: <20231225062550.714-1-cooper.joshua@linux.alibaba.com>
In-Reply-To: <20231220123249.555-1-cooper.joshua@linux.alibaba.com>
References: <20231220123249.555-1-cooper.joshua@linux.alibaba.com>

This patch adds the "th." prefix to all XTheadVector instructions by
implementing new assembly output functions. In this version, we follow
Kito's suggestion and only check whether the prefix is 'v', so that no
extra attribute is needed.

gcc/ChangeLog:

	* config/riscv/riscv-protos.h (riscv_asm_output_opcode):
	New function to add assembler insn code prefix/suffix.
	* config/riscv/riscv.cc (riscv_asm_output_opcode): Likewise.
	* config/riscv/riscv.h (ASM_OUTPUT_OPCODE): Likewise.

Co-authored-by: Jin Ma
Co-authored-by: Xianmiao Qu
Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>
---
 gcc/config/riscv/riscv-protos.h          |  1 +
 gcc/config/riscv/riscv.cc                | 19 +++++++++++++++++++
 gcc/config/riscv/riscv.h                 |  4 ++++
 .../riscv/rvv/xtheadvector/prefix.c      | 12 ++++++++++++
 4 files changed, 36 insertions(+)
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/prefix.c
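For illustration only, here is a minimal standalone C sketch of the rewriting
rule the hook below installs. This is not GCC code: the emit_opcode helper and
the driver are hypothetical; only the prefix check mirrors the patch.

    #include <stdio.h>

    /* Mimic the hook: print "th." before any mnemonic that starts
       with 'v', then hand the mnemonic back unchanged.  */
    static const char *
    emit_opcode (FILE *out, const char *p)
    {
      if (p[0] == 'v')
        fputs ("th.", out);
      return p;
    }

    int
    main (void)
    {
      const char *insns[] = { "vsetvli", "vadd.vv", "add" };
      for (int i = 0; i < 3; i++)
        {
          fputs ("\t", stdout);
          fputs (emit_opcode (stdout, insns[i]), stdout);
          fputs ("\n", stdout);  /* prints th.vsetvli, th.vadd.vv, add */
        }
      return 0;
    }

The real hook runs once per emitted opcode, so plain RVV mnemonics become
their "th."-prefixed XTheadVector forms without touching any patterns.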
diff --git a/gcc/config/riscv/riscv-protos.h b/gcc/config/riscv/riscv-protos.h
index 31049ef7523..5ea54b45703 100644
--- a/gcc/config/riscv/riscv-protos.h
+++ b/gcc/config/riscv/riscv-protos.h
@@ -102,6 +102,7 @@ struct riscv_address_info {
 };
 
 /* Routines implemented in riscv.cc.  */
+extern const char *riscv_asm_output_opcode (FILE *asm_out_file, const char *p);
 extern enum riscv_symbol_type riscv_classify_symbolic_expression (rtx);
 extern bool riscv_symbolic_constant_p (rtx, enum riscv_symbol_type *);
 extern int riscv_float_const_rtx_index_for_fli (rtx);
diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc
index 0d1cbc5cb5f..30e6ced5f3f 100644
--- a/gcc/config/riscv/riscv.cc
+++ b/gcc/config/riscv/riscv.cc
@@ -5636,6 +5636,25 @@ riscv_get_v_regno_alignment (machine_mode mode)
   return lmul;
 }
 
+/* Define ASM_OUTPUT_OPCODE to do anything special before
+   emitting an opcode.  */
+const char *
+riscv_asm_output_opcode (FILE *asm_out_file, const char *p)
+{
+  if (!TARGET_XTHEADVECTOR)
+    return p;
+
+  if (current_output_insn == NULL_RTX)
+    return p;
+
+  /* We need to add the th. prefix to all the XTheadVector
+     instructions here.  */
+  if (p[0] == 'v')
+    fputs ("th.", asm_out_file);
+
+  return p;
+}
+
 /* Implement TARGET_PRINT_OPERAND.  The RISCV-specific operand codes are:
 
    'h'	Print the high-part relocation associated with OP, after stripping
diff --git a/gcc/config/riscv/riscv.h b/gcc/config/riscv/riscv.h
index 6df9ec73c5e..c33361a254d 100644
--- a/gcc/config/riscv/riscv.h
+++ b/gcc/config/riscv/riscv.h
@@ -826,6 +826,10 @@ extern enum riscv_cc get_riscv_cc (const rtx use);
     asm_fprintf ((FILE), "%U%s", (NAME));				\
   } while (0)
 
+#undef ASM_OUTPUT_OPCODE
+#define ASM_OUTPUT_OPCODE(STREAM, PTR)	\
+  (PTR) = riscv_asm_output_opcode (STREAM, PTR)
+
 #define JUMP_TABLES_IN_TEXT_SECTION 0
 #define CASE_VECTOR_MODE SImode
 #define CASE_VECTOR_PC_RELATIVE (riscv_cmodel != CM_MEDLOW)
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/prefix.c b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/prefix.c
new file mode 100644
index 00000000000..48867f4ddfb
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/xtheadvector/prefix.c
@@ -0,0 +1,12 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv32gc_xtheadvector -mabi=ilp32 -O0" } */
+
+#include "riscv_vector.h"
+
+vint32m1_t
+prefix (vint32m1_t vx, vint32m1_t vy, size_t vl)
+{
+  return __riscv_vadd_vv_i32m1 (vx, vy, vl);
+}
+
+/* { dg-final { scan-assembler {\mth\.v\M} } } */
\ No newline at end of file
From patchwork Mon Dec 25 06:29:39 2023
X-Patchwork-Submitter: joshua
X-Patchwork-Id: 1880070
From: "Jun Sha (Joshua)" <cooper.joshua@linux.alibaba.com>
To: gcc-patches@gcc.gnu.org
Cc: jim.wilson.gcc@gmail.com, palmer@dabbelt.com, andrew@sifive.com,
 philipp.tomsich@vrull.eu, jeffreyalaw@gmail.com, christoph.muellner@vrull.eu,
 juzhe.zhong@rivai.ai, "Jun Sha (Joshua)", Jin Ma, Xianmiao Qu
Subject: [PATCH v4 5/6] RISC-V: Handle differences between XTheadvector and
 Vector
Date: Mon, 25 Dec 2023 14:29:39 +0800
Message-Id: <20231225062939.767-1-cooper.joshua@linux.alibaba.com>
In-Reply-To: <20231220123419.608-1-cooper.joshua@linux.alibaba.com>
References: <20231220123419.608-1-cooper.joshua@linux.alibaba.com>

This patch handles the differences in instruction generation between
Vector and XTheadVector. In this version, we only support the subset of
XTheadVector instructions that can be derived directly from current
RVV 1.0 instructions by simply adding the "th." prefix.

XTheadVector instructions that have different names from their RVV 1.0
counterparts but share the same patterns will be handled in the
following patches with an ASM target hook that rewrites the whole
instruction string.

For some vector patterns that cannot be handled this way, we use
"!TARGET_XTHEADVECTOR" to disable them in vector.md so that we do not
generate instructions that XTheadVector does not support, such as
vmv1r and vsext.vf2.

gcc/ChangeLog:

	* config.gcc: Add files for XTheadVector intrinsics.
	* config/riscv/autovec.md: Guard XTheadVector.
	* config/riscv/riscv-string.cc (expand_block_move):
	Guard XTheadVector.
	* config/riscv/riscv-v.cc (legitimize_move):
	New expansion.
	(get_prefer_tail_policy): Give specific value for tail.
	(get_prefer_mask_policy): Give specific value for mask.
	(vls_mode_valid_p): Avoid autovec.
	* config/riscv/riscv-vector-builtins-shapes.cc (check_type):
	New function.
	(build_one): Call check_type for XTheadVector.
	* config/riscv/riscv-vector-builtins.cc (DEF_RVV_FUNCTION):
	(DEF_THEAD_RVV_FUNCTION): Add new macros.
	(check_required_extensions):
	(handle_pragma_vector):
	* config/riscv/riscv-vector-builtins.h (RVV_REQUIRE_VECTOR):
	(RVV_REQUIRE_XTHEADVECTOR):
	Add RVV_REQUIRE_VECTOR and RVV_REQUIRE_XTHEADVECTOR.
	(struct function_group_info):
	* config/riscv/riscv-vector-switch.def (ENTRY):
	Disable fractional mode for the XTheadVector extension.
	(TUPLE_ENTRY): Likewise.
	* config/riscv/riscv-vsetvl.cc: Add functions for XTheadVector.
	* config/riscv/riscv.cc (riscv_v_ext_vls_mode_p):
	Guard XTheadVector.
	(riscv_v_adjust_bytesize): Likewise.
	(riscv_preferred_simd_mode): Likewise.
	(riscv_autovectorize_vector_modes): Likewise.
	(riscv_vector_mode_supported_any_target_p): Likewise.
	(TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P): Likewise.
	* config/riscv/vector-iterators.md: Remove fractional LMUL.
	* config/riscv/vector.md: Include thead-vector.md.
	* config/riscv/riscv_th_vector.h: New file.
	* config/riscv/thead-vector.md: New file.

gcc/testsuite/ChangeLog:

	* gcc.target/riscv/rvv/base/pragma-1.c: Add XTheadVector.
	* gcc.target/riscv/rvv/base/abi-1.c: Exclude XTheadVector.
	* lib/target-supports.exp: Add target for XTheadVector.

Co-authored-by: Jin Ma
Co-authored-by: Xianmiao Qu
Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>
---
 gcc/config.gcc                                |   2 +-
 gcc/config/riscv/autovec.md                   |   2 +-
 gcc/config/riscv/predicates.md                |   8 +-
 gcc/config/riscv/riscv-string.cc              |   3 +
 gcc/config/riscv/riscv-v.cc                   |  13 +-
 .../riscv/riscv-vector-builtins-shapes.cc     |  23 +++
 gcc/config/riscv/riscv-vector-switch.def      | 150 +++++++-------
 gcc/config/riscv/riscv-vsetvl.cc              |  10 +
 gcc/config/riscv/riscv.cc                     |  20 +-
 gcc/config/riscv/riscv_th_vector.h            |  49 +++++
 gcc/config/riscv/thead-vector.md              | 120 +++++++++++
 gcc/config/riscv/vector-iterators.md          | 186 +++++++++---------
 gcc/config/riscv/vector.md                    |  36 +++-
 .../gcc.target/riscv/rvv/base/abi-1.c         |   2 +-
 .../gcc.target/riscv/rvv/base/pragma-1.c      |   2 +-
 gcc/testsuite/lib/target-supports.exp         |  12 ++
 16 files changed, 449 insertions(+), 189 deletions(-)
 create mode 100644 gcc/config/riscv/riscv_th_vector.h
 create mode 100644 gcc/config/riscv/thead-vector.md
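Before the diffs, a standalone C sketch of the mode-gating rule that the
ENTRY/TUPLE_ENTRY and iterator changes below implement: fractional-LMUL
modes become unavailable whenever XTheadVector is enabled. This is
illustrative only; the struct, function, and driver are hypothetical and
not GCC code.

    #include <stdbool.h>
    #include <stdio.h>

    struct target { bool xtheadvector; int min_vlen; };

    /* A fractional-LMUL mode such as RVVMF2QI is available only without
       XTheadVector; whole-register (LMUL >= 1) modes stay available.  */
    static bool
    mode_enabled_p (const struct target *t, bool fractional_lmul,
                    int required_min_vlen)
    {
      if (fractional_lmul && t->xtheadvector)
        return false;
      return t->min_vlen > required_min_vlen;
    }

    int
    main (void)
    {
      struct target rvv = { false, 128 }, thead = { true, 128 };
      printf ("RVVMF2QI on RVV 1.0:      %d\n", mode_enabled_p (&rvv, true, 0));   /* 1 */
      printf ("RVVMF2QI on XTheadVector: %d\n", mode_enabled_p (&thead, true, 0)); /* 0 */
      printf ("RVVM1QI  on XTheadVector: %d\n", mode_enabled_p (&thead, false, 0));/* 1 */
      return 0;
    }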
diff --git a/gcc/config.gcc b/gcc/config.gcc
index f0676c830e8..1445d98c147 100644
--- a/gcc/config.gcc
+++ b/gcc/config.gcc
@@ -549,7 +549,7 @@ riscv*)
 	extra_objs="${extra_objs} riscv-vector-builtins.o riscv-vector-builtins-shapes.o riscv-vector-builtins-bases.o"
 	extra_objs="${extra_objs} thead.o riscv-target-attr.o"
 	d_target_objs="riscv-d.o"
-	extra_headers="riscv_vector.h"
+	extra_headers="riscv_vector.h riscv_th_vector.h"
 	target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.cc"
 	target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.h"
 	;;
diff --git a/gcc/config/riscv/autovec.md b/gcc/config/riscv/autovec.md
index 8b8a92f10a1..1fac56c7095 100644
--- a/gcc/config/riscv/autovec.md
+++ b/gcc/config/riscv/autovec.md
@@ -2579,7 +2579,7 @@
   [(match_operand 0 "register_operand")
    (match_operand 1 "memory_operand")
    (match_operand:ANYI 2 "const_int_operand")]
-  "TARGET_VECTOR"
+  "TARGET_VECTOR && !TARGET_XTHEADVECTOR"
 {
   riscv_vector::expand_rawmemchr(<MODE>mode, operands[0], operands[1],
				 operands[2]);
diff --git a/gcc/config/riscv/predicates.md b/gcc/config/riscv/predicates.md
index 30689ee0a6a..16f86c5ae97 100644
--- a/gcc/config/riscv/predicates.md
+++ b/gcc/config/riscv/predicates.md
@@ -64,8 +64,9 @@
        (match_operand 0 "register_operand")))
 
 (define_predicate "vector_csr_operand"
-  (ior (match_operand 0 "const_csr_operand")
-       (match_operand 0 "register_operand")))
+  (ior (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)")
+	    (match_operand 0 "const_csr_operand"))
+       (match_operand 0 "register_operand")))
 
 ;; V has 32-bit unsigned immediates.  This happens to be the same constraint as
 ;; the csr_operand, but it's not CSR related.
@@ -432,7 +433,8 @@
 
 ;; Predicates for the V extension.
(define_special_predicate "vector_length_operand" (ior (match_operand 0 "pmode_register_operand") - (match_operand 0 "const_csr_operand"))) + (and (match_test "!TARGET_XTHEADVECTOR || rtx_equal_p (op, const0_rtx)") + (match_operand 0 "const_csr_operand")))) (define_special_predicate "autovec_length_operand" (ior (match_operand 0 "pmode_register_operand") diff --git a/gcc/config/riscv/riscv-string.cc b/gcc/config/riscv/riscv-string.cc index 11c1f74d0b3..ec8f3486fd8 100644 --- a/gcc/config/riscv/riscv-string.cc +++ b/gcc/config/riscv/riscv-string.cc @@ -808,6 +808,9 @@ expand_block_move (rtx dst_in, rtx src_in, rtx length_in) bnez a2, loop # Any more? ret # Return */ + if (TARGET_XTHEADVECTOR) + return false; + gcc_assert (TARGET_VECTOR); HOST_WIDE_INT potential_ew diff --git a/gcc/config/riscv/riscv-v.cc b/gcc/config/riscv/riscv-v.cc index 038ab084a37..5e9e45aecd2 100644 --- a/gcc/config/riscv/riscv-v.cc +++ b/gcc/config/riscv/riscv-v.cc @@ -1523,6 +1523,13 @@ legitimize_move (rtx dest, rtx *srcp) return true; } + if (TARGET_XTHEADVECTOR) + { + emit_insn (gen_pred_th_whole_mov (mode, dest, src, + RVV_VLMAX, GEN_INT(VLMAX))); + return true; + } + if (riscv_v_ext_vls_mode_p (mode)) { if (GET_MODE_NUNITS (mode).to_constant () <= 31) @@ -1772,7 +1779,7 @@ get_prefer_tail_policy () compiler pick up either agnostic or undisturbed. Maybe we will have a compile option like -mprefer=agnostic to set this value???. */ - return TAIL_ANY; + return TARGET_XTHEADVECTOR ? TAIL_AGNOSTIC : TAIL_ANY; } /* Get prefer mask policy. */ @@ -1783,7 +1790,7 @@ get_prefer_mask_policy () compiler pick up either agnostic or undisturbed. Maybe we will have a compile option like -mprefer=agnostic to set this value???. */ - return MASK_ANY; + return TARGET_XTHEADVECTOR ? MASK_UNDISTURBED : MASK_ANY; } /* Get avl_type rtx. */ @@ -4383,7 +4390,7 @@ cmp_lmul_gt_one (machine_mode mode) bool vls_mode_valid_p (machine_mode vls_mode) { - if (!TARGET_VECTOR) + if (!TARGET_VECTOR || TARGET_XTHEADVECTOR) return false; if (riscv_autovec_preference == RVV_SCALABLE) diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.cc b/gcc/config/riscv/riscv-vector-builtins-shapes.cc index 4a754e0228f..6b49404a1fa 100644 --- a/gcc/config/riscv/riscv-vector-builtins-shapes.cc +++ b/gcc/config/riscv/riscv-vector-builtins-shapes.cc @@ -33,6 +33,25 @@ namespace riscv_vector { +/* Check whether the RETURN_TYPE and ARGUMENT_TYPES are + valid for the function. */ + +static bool +check_type (tree return_type, vec &argument_types) +{ + tree arg; + unsigned i; + + if (!return_type) + return false; + + FOR_EACH_VEC_ELT (argument_types, i, arg) + if (!arg) + return false; + + return true; +} + /* Add one function instance for GROUP, using operand suffix at index OI, mode suffix at index PAIR && bi and predication suffix at index pred_idx. 
 static void
@@ -49,6 +68,10 @@ build_one (function_builder &b, const function_group_info &group,
 		     group.ops_infos.types[vec_type_idx].index);
   b.allocate_argument_types (function_instance, argument_types);
   b.apply_predication (function_instance, return_type, argument_types);
+
+  if (TARGET_XTHEADVECTOR && !check_type (return_type, argument_types))
+    return;
+
   b.add_overloaded_function (function_instance, *group.shape);
   b.add_unique_function (function_instance, (*group.shape), return_type,
			 argument_types);
diff --git a/gcc/config/riscv/riscv-vector-switch.def b/gcc/config/riscv/riscv-vector-switch.def
index 5c9f9bcbc3e..f7a66b34bae 100644
--- a/gcc/config/riscv/riscv-vector-switch.def
+++ b/gcc/config/riscv/riscv-vector-switch.def
@@ -68,9 +68,9 @@ Encode the ratio of SEW/LMUL into the mask types.
 #endif
 
 /* Disable modes if TARGET_MIN_VLEN == 32.  */
-ENTRY (RVVMF64BI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)
-ENTRY (RVVMF32BI, true, LMUL_F4, 32)
-ENTRY (RVVMF16BI, true, LMUL_F2, 16)
+ENTRY (RVVMF64BI, TARGET_MIN_VLEN > 32, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F8, 64)
+ENTRY (RVVMF32BI, true, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F4, 32)
+ENTRY (RVVMF16BI, true, TARGET_XTHEADVECTOR ? LMUL_1 : LMUL_F2, 16)
 ENTRY (RVVMF8BI, true, LMUL_1, 8)
 ENTRY (RVVMF4BI, true, LMUL_2, 4)
 ENTRY (RVVMF2BI, true, LMUL_4, 2)
@@ -81,39 +81,39 @@ ENTRY (RVVM8QI, true, LMUL_8, 1)
 ENTRY (RVVM4QI, true, LMUL_4, 2)
 ENTRY (RVVM2QI, true, LMUL_2, 4)
 ENTRY (RVVM1QI, true, LMUL_1, 8)
-ENTRY (RVVMF2QI, true, LMUL_F2, 16)
-ENTRY (RVVMF4QI, true, LMUL_F4, 32)
-ENTRY (RVVMF8QI, TARGET_MIN_VLEN > 32, LMUL_F8, 64)
+ENTRY (RVVMF2QI, !TARGET_XTHEADVECTOR, LMUL_F2, 16)
+ENTRY (RVVMF4QI, !TARGET_XTHEADVECTOR, LMUL_F4, 32)
+ENTRY (RVVMF8QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F8, 64)
 
 /* Disable modes if TARGET_MIN_VLEN == 32.  */
 ENTRY (RVVM8HI, true, LMUL_8, 2)
 ENTRY (RVVM4HI, true, LMUL_4, 4)
 ENTRY (RVVM2HI, true, LMUL_2, 8)
 ENTRY (RVVM1HI, true, LMUL_1, 16)
-ENTRY (RVVMF2HI, true, LMUL_F2, 32)
-ENTRY (RVVMF4HI, TARGET_MIN_VLEN > 32, LMUL_F4, 64)
+ENTRY (RVVMF2HI, !TARGET_XTHEADVECTOR, LMUL_F2, 32)
+ENTRY (RVVMF4HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F4, 64)
 
 /* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_16.  */
 ENTRY (RVVM8HF, TARGET_VECTOR_ELEN_FP_16, LMUL_8, 2)
 ENTRY (RVVM4HF, TARGET_VECTOR_ELEN_FP_16, LMUL_4, 4)
 ENTRY (RVVM2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_2, 8)
 ENTRY (RVVM1HF, TARGET_VECTOR_ELEN_FP_16, LMUL_1, 16)
-ENTRY (RVVMF2HF, TARGET_VECTOR_ELEN_FP_16, LMUL_F2, 32)
-ENTRY (RVVMF4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, LMUL_F4, 64)
+ENTRY (RVVMF2HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, LMUL_F2, 32)
+ENTRY (RVVMF4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F4, 64)
 
 /* Disable modes if TARGET_MIN_VLEN == 32.  */
 ENTRY (RVVM8SI, true, LMUL_8, 4)
 ENTRY (RVVM4SI, true, LMUL_4, 8)
 ENTRY (RVVM2SI, true, LMUL_2, 16)
 ENTRY (RVVM1SI, true, LMUL_1, 32)
-ENTRY (RVVMF2SI, TARGET_MIN_VLEN > 32, LMUL_F2, 64)
+ENTRY (RVVMF2SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F2, 64)
 
 /* Disable modes if TARGET_MIN_VLEN == 32 or !TARGET_VECTOR_ELEN_FP_32.  */
 ENTRY (RVVM8SF, TARGET_VECTOR_ELEN_FP_32, LMUL_8, 4)
 ENTRY (RVVM4SF, TARGET_VECTOR_ELEN_FP_32, LMUL_4, 8)
 ENTRY (RVVM2SF, TARGET_VECTOR_ELEN_FP_32, LMUL_2, 16)
 ENTRY (RVVM1SF, TARGET_VECTOR_ELEN_FP_32, LMUL_1, 32)
-ENTRY (RVVMF2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, LMUL_F2, 64)
+ENTRY (RVVMF2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, LMUL_F2, 64)
 
 /* Disable modes if !TARGET_VECTOR_ELEN_64.  */
 ENTRY (RVVM8DI, TARGET_VECTOR_ELEN_64, LMUL_8, 8)
@@ -140,127 +140,127 @@ ENTRY (RVVM1DF, TARGET_VECTOR_ELEN_FP_64, LMUL_1, 64)
 #endif
 
 TUPLE_ENTRY (RVVM1x8QI, true, RVVM1QI, 8, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x8QI, true, RVVMF2QI, 8, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x8QI, true, RVVMF4QI, 8, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x8QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 8, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x8QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 8, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x8QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 8, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x8QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 8, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM1x7QI, true, RVVM1QI, 7, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x7QI, true, RVVMF2QI, 7, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x7QI, true, RVVMF4QI, 7, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x7QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 7, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x7QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 7, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x7QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 7, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x7QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 7, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM1x6QI, true, RVVM1QI, 6, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x6QI, true, RVVMF2QI, 6, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x6QI, true, RVVMF4QI, 6, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x6QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 6, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x6QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 6, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x6QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 6, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x6QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 6, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM1x5QI, true, RVVM1QI, 5, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x5QI, true, RVVMF2QI, 5, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x5QI, true, RVVMF4QI, 5, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x5QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 5, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x5QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 5, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x5QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 5, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x5QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 5, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM2x4QI, true, RVVM2QI, 4, LMUL_2, 4)
 TUPLE_ENTRY (RVVM1x4QI, true, RVVM1QI, 4, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x4QI, true, RVVMF2QI, 4, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x4QI, true, RVVMF4QI, 4, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x4QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 4, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x4QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 4, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x4QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 4, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x4QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 4, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM2x3QI, true, RVVM2QI, 3, LMUL_2, 4)
 TUPLE_ENTRY (RVVM1x3QI, true, RVVM1QI, 3, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x3QI, true, RVVMF2QI, 3, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x3QI, true, RVVMF4QI, 3, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x3QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 3, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x3QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 3, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x3QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 3, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x3QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 3, LMUL_F8, 64)
 TUPLE_ENTRY (RVVM4x2QI, true, RVVM4QI, 2, LMUL_4, 2)
 TUPLE_ENTRY (RVVM2x2QI, true, RVVM2QI, 2, LMUL_2, 4)
 TUPLE_ENTRY (RVVM1x2QI, true, RVVM1QI, 2, LMUL_1, 8)
-TUPLE_ENTRY (RVVMF2x2QI, true, RVVMF2QI, 2, LMUL_F2, 16)
-TUPLE_ENTRY (RVVMF4x2QI, true, RVVMF4QI, 2, LMUL_F4, 32)
-TUPLE_ENTRY (RVVMF8x2QI, TARGET_MIN_VLEN > 32, RVVMF8QI, 2, LMUL_F8, 64)
+TUPLE_ENTRY (RVVMF2x2QI, !TARGET_XTHEADVECTOR, RVVMF2QI, 2, LMUL_F2, 16)
+TUPLE_ENTRY (RVVMF4x2QI, !TARGET_XTHEADVECTOR, RVVMF4QI, 2, LMUL_F4, 32)
+TUPLE_ENTRY (RVVMF8x2QI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF8QI, 2, LMUL_F8, 64)
 
 TUPLE_ENTRY (RVVM1x8HI, true, RVVM1HI, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8HI, true, RVVMF2HI, 8, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x8HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 8, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x8HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x8HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 8, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x7HI, true, RVVM1HI, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7HI, true, RVVMF2HI, 7, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x7HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 7, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x7HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x7HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 7, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x6HI, true, RVVM1HI, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6HI, true, RVVMF2HI, 6, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x6HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 6, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x6HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x6HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 6, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x5HI, true, RVVM1HI, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5HI, true, RVVMF2HI, 5, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x5HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 5, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x5HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x5HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 5, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x4HI, true, RVVM2HI, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4HI, true, RVVM1HI, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4HI, true, RVVMF2HI, 4, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x4HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 4, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x4HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x4HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 4, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x3HI, true, RVVM2HI, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3HI, true, RVVM1HI, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3HI, true, RVVMF2HI, 3, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x3HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 3, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x3HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x3HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 3, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM4x2HI, true, RVVM4HI, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2HI, true, RVVM2HI, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2HI, true, RVVM1HI, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2HI, true, RVVMF2HI, 2, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x2HI, TARGET_MIN_VLEN > 32, RVVMF4HI, 2, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x2HI, !TARGET_XTHEADVECTOR, RVVMF2HI, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x2HI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HI, 2, LMUL_F4, 64)
 
 TUPLE_ENTRY (RVVM1x8HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 8, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 8, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x8HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x8HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 8, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x7HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 7, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x7HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 7, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x7HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x7HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 7, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x6HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 6, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x6HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 6, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x6HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x6HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 6, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM1x5HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 5, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x5HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 5, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x5HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x5HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 5, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 4, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 4, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x4HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x4HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 4, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 3, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x3HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 3, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x3HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x3HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 3, LMUL_F4, 64)
 TUPLE_ENTRY (RVVM4x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM4HF, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM2HF, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2HF, TARGET_VECTOR_ELEN_FP_16, RVVM1HF, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2HF, TARGET_VECTOR_ELEN_FP_16, RVVMF2HF, 2, LMUL_F2, 32)
-TUPLE_ENTRY (RVVMF4x2HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32, RVVMF4HF, 2, LMUL_F4, 64)
+TUPLE_ENTRY (RVVMF2x2HF, TARGET_VECTOR_ELEN_FP_16 && !TARGET_XTHEADVECTOR, RVVMF2HF, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF4x2HF, TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF4HF, 2, LMUL_F4, 64)
 
 TUPLE_ENTRY (RVVM1x8SI, true, RVVM1SI, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x8SI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF2SI, 8, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x7SI, true, RVVM1SI, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x7SI, (TARGET_MIN_VLEN > 32) && !TARGET_XTHEADVECTOR, RVVMF2SI, 7, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x6SI, true, RVVM1SI, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x6SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 6, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x5SI, true, RVVM1SI, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x5SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 5, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x4SI, true, RVVM2SI, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4SI, true, RVVM1SI, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x4SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 4, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x3SI, true, RVVM2SI, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3SI, true, RVVM1SI, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x3SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 3, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM4x2SI, true, RVVM4SI, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2SI, true, RVVM2SI, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2SI, true, RVVM1SI, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2SI, TARGET_MIN_VLEN > 32, RVVMF2SI, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x2SI, TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SI, 2, LMUL_F2, 32)
 
 TUPLE_ENTRY (RVVM1x8SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 8, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 8, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x8SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 8, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x7SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 7, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x7SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 7, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x7SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 7, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x6SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 6, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x6SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 6, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x6SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 6, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM1x5SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 5, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x5SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 5, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x5SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 5, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 4, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x4SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 4, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 4, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x4SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 4, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM2x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 3, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x3SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 3, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x3SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32,
 RVVMF2SF, 3, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x3SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 3, LMUL_F2, 32)
 TUPLE_ENTRY (RVVM4x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM4SF, 2, LMUL_4, 4)
 TUPLE_ENTRY (RVVM2x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM2SF, 2, LMUL_2, 8)
 TUPLE_ENTRY (RVVM1x2SF, TARGET_VECTOR_ELEN_FP_32, RVVM1SF, 2, LMUL_1, 16)
-TUPLE_ENTRY (RVVMF2x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32, RVVMF2SF, 2, LMUL_F2, 32)
+TUPLE_ENTRY (RVVMF2x2SF, TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && !TARGET_XTHEADVECTOR, RVVMF2SF, 2, LMUL_F2, 32)
 
 TUPLE_ENTRY (RVVM1x8DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 8, LMUL_1, 16)
 TUPLE_ENTRY (RVVM1x7DI, TARGET_VECTOR_ELEN_64, RVVM1DI, 7, LMUL_1, 16)
diff --git a/gcc/config/riscv/riscv-vsetvl.cc b/gcc/config/riscv/riscv-vsetvl.cc
index eabaef80f89..f3496d9e72e 100644
--- a/gcc/config/riscv/riscv-vsetvl.cc
+++ b/gcc/config/riscv/riscv-vsetvl.cc
@@ -1117,6 +1117,16 @@ public:
       avl = GEN_INT (0);
     rtx sew = gen_int_mode (get_sew (), Pmode);
     rtx vlmul = gen_int_mode (get_vlmul (), Pmode);
+
+    if (TARGET_XTHEADVECTOR)
+      {
+	if (change_vtype_only_p ())
+	  return gen_th_vsetvl_vtype_change_only (sew, vlmul);
+	else if (has_vl () && !ignore_vl)
+	  return gen_th_vsetvl (Pmode, get_vl (), avl, sew, vlmul);
+	else
+	  return gen_th_vsetvl_discard_result (Pmode, avl, sew, vlmul);
+      }
 
     rtx ta = gen_int_mode (get_ta (), Pmode);
     rtx ma = gen_int_mode (get_ma (), Pmode);
diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc
index 30e6ced5f3f..d06401c46c8 100644
--- a/gcc/config/riscv/riscv.cc
+++ b/gcc/config/riscv/riscv.cc
@@ -1406,6 +1406,9 @@ riscv_v_adjust_bytesize (machine_mode mode, int scale)
 {
   if (riscv_v_ext_vector_mode_p (mode))
     {
+      if (TARGET_XTHEADVECTOR)
+	return BYTES_PER_RISCV_VECTOR;
+
       poly_int64 nunits = GET_MODE_NUNITS (mode);
       poly_int64 mode_size = GET_MODE_SIZE (mode);
 
@@ -9970,7 +9973,7 @@ riscv_use_divmod_expander (void)
 static machine_mode
 riscv_preferred_simd_mode (scalar_mode mode)
 {
-  if (TARGET_VECTOR)
+  if (TARGET_VECTOR && !TARGET_XTHEADVECTOR)
     return riscv_vector::preferred_simd_mode (mode);
 
   return word_mode;
@@ -10321,7 +10324,7 @@ riscv_mode_priority (int, int n)
 unsigned int
 riscv_autovectorize_vector_modes (vector_modes *modes, bool all)
 {
-  if (TARGET_VECTOR)
+  if (TARGET_VECTOR && !TARGET_XTHEADVECTOR)
     return riscv_vector::autovectorize_vector_modes (modes, all);
 
   return default_autovectorize_vector_modes (modes, all);
@@ -10504,6 +10507,16 @@ extract_base_offset_in_addr (rtx mem, rtx *base, rtx *offset)
   return false;
 }
 
+/* Implement the target hook vector_mode_supported_any_target_p.  */
+
+static bool
+riscv_vector_mode_supported_any_target_p (machine_mode mode)
+{
+  if (TARGET_XTHEADVECTOR)
+    return false;
+  return true;
+}
+
 /* Initialize the GCC target structure.  */
 #undef TARGET_ASM_ALIGNED_HI_OP
 #define TARGET_ASM_ALIGNED_HI_OP "\t.half\t"
@@ -10847,6 +10860,9 @@ extract_base_offset_in_addr (rtx mem, rtx *base, rtx *offset)
 #undef TARGET_PREFERRED_ELSE_VALUE
 #define TARGET_PREFERRED_ELSE_VALUE riscv_preferred_else_value
 
+#undef TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P
+#define TARGET_VECTOR_MODE_SUPPORTED_ANY_TARGET_P riscv_vector_mode_supported_any_target_p
+
 struct gcc_target targetm = TARGET_INITIALIZER;
 
 #include "gt-riscv.h"
diff --git a/gcc/config/riscv/riscv_th_vector.h b/gcc/config/riscv/riscv_th_vector.h
new file mode 100644
index 00000000000..6f47e0c90a4
--- /dev/null
+++ b/gcc/config/riscv/riscv_th_vector.h
@@ -0,0 +1,49 @@
+/* RISC-V 'XTheadVector' Extension intrinsics include file.
+   Copyright (C) 2022-2023 Free Software Foundation, Inc.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published
+   by the Free Software Foundation; either version 3, or (at your
+   option) any later version.
+
+   GCC is distributed in the hope that it will be useful, but WITHOUT
+   ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+   or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public
+   License for more details.
+
+   Under Section 7 of GPL version 3, you are granted additional
+   permissions described in the GCC Runtime Library Exception, version
+   3.1, as published by the Free Software Foundation.
+
+   You should have received a copy of the GNU General Public License and
+   a copy of the GCC Runtime Library Exception along with this program;
+   see the files COPYING3 and COPYING.RUNTIME respectively.  If not, see
+   <http://www.gnu.org/licenses/>.  */
+
+#ifndef __RISCV_TH_VECTOR_H
+#define __RISCV_TH_VECTOR_H
+
+#include <stdint.h>
+#include <stddef.h>
+
+#ifndef __riscv_xtheadvector
+#error "XTheadVector intrinsics require the xtheadvector extension."
+#else
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* NOTE: This implementation of riscv_th_vector.h is intentionally short.  It does
+   not define the RVV types and intrinsic functions directly in C and C++
+   code, but instead uses the following pragma to tell GCC to insert the
+   necessary type and function definitions itself.  The net effect is the
+   same, and the file is a complete implementation of riscv_th_vector.h.  */
+#pragma riscv intrinsic "vector"
+
+#ifdef __cplusplus
+}
+#endif // __cplusplus
+#endif // __riscv_xtheadvector
+#endif // __RISCV_TH_VECTOR_H
diff --git a/gcc/config/riscv/thead-vector.md b/gcc/config/riscv/thead-vector.md
new file mode 100644
index 00000000000..ebbb25f8f2a
--- /dev/null
+++ b/gcc/config/riscv/thead-vector.md
@@ -0,0 +1,120 @@
+(define_c_enum "unspec" [
+  UNSPEC_TH_VWLDST
+])
+
+(define_mode_iterator V_VLS_VT [V VLS VT])
+(define_mode_iterator V_VB_VLS_VT [V VB VLS VT])
+
+(define_split
+  [(set (match_operand:V_VB_VLS_VT 0 "reg_or_mem_operand")
+	(match_operand:V_VB_VLS_VT 1 "reg_or_mem_operand"))]
+  "TARGET_XTHEADVECTOR"
+  [(const_int 0)]
+  {
+    emit_insn (gen_pred_th_whole_mov (<MODE>mode, operands[0], operands[1],
+				      RVV_VLMAX, GEN_INT (riscv_vector::VLMAX)));
+    DONE;
+  })
+
+(define_insn_and_split "@pred_th_whole_mov<mode>"
+  [(set (match_operand:V_VLS_VT 0 "reg_or_mem_operand" "=vr,vr, m")
+	(unspec:V_VLS_VT
+	  [(match_operand:V_VLS_VT 1 "reg_or_mem_operand" " vr, m,vr")
+	   (match_operand 2 "vector_length_operand"       " rK, rK, rK")
+	   (match_operand 3 "const_1_operand"             "  i,  i,  i")
+	   (reg:SI VL_REGNUM)
+	   (reg:SI VTYPE_REGNUM)]
+	UNSPEC_TH_VWLDST))]
+  "TARGET_XTHEADVECTOR"
+  "@
+   vmv.v.v\t%0,%1
+   vle.v\t%0,%1
+   vse.v\t%1,%0"
+  "&& REG_P (operands[0]) && REG_P (operands[1])
+   && REGNO (operands[0]) == REGNO (operands[1])"
+  [(const_int 0)]
+  ""
+  [(set_attr "type" "vimov,vlds,vlds")
+   (set_attr "mode" "<MODE>")
+   (set (attr "ta") (symbol_ref "riscv_vector::TAIL_UNDISTURBED"))
+   (set (attr "ma") (symbol_ref "riscv_vector::MASK_UNDISTURBED"))
+   (set (attr "avl_type_idx") (const_int 3))
+   (set_attr "vl_op_idx" "2")])
+
+(define_insn_and_split "@pred_th_whole_mov<mode>"
+  [(set (match_operand:VB 0 "reg_or_mem_operand" "=vr,vr, m")
+	(unspec:VB
+	  [(match_operand:VB 1 "reg_or_mem_operand" " vr, m,vr")
+	   (match_operand 2 "vector_length_operand" " rK, rK, rK")
+	   (match_operand 3 "const_1_operand"       "  i,  i,  i")
+	   (reg:SI VL_REGNUM)
+	   (reg:SI VTYPE_REGNUM)]
+	UNSPEC_TH_VWLDST))]
+  "TARGET_XTHEADVECTOR"
+  "@
+   vmv.v.v\t%0,%1
+   vle.v\t%0,%1
+   vse.v\t%1,%0"
+  "&& REG_P (operands[0]) && REG_P (operands[1])
+   && REGNO (operands[0]) == REGNO (operands[1])"
+  [(const_int 0)]
+  ""
+  [(set_attr "type" "vimov,vlds,vlds")
+   (set_attr "mode" "<MODE>")
+   (set (attr "ta") (symbol_ref "riscv_vector::TAIL_UNDISTURBED"))
+   (set (attr "ma") (symbol_ref "riscv_vector::MASK_UNDISTURBED"))
+   (set (attr "avl_type_idx") (const_int 3))
+   (set_attr "vl_op_idx" "2")
+   (set (attr "sew") (const_int 8))
+   (set (attr "vlmul") (symbol_ref "riscv_vector::LMUL_1"))])
+
+(define_insn "@th_vsetvl<mode>"
+  [(set (match_operand:P 0 "register_operand" "=r")
+	(unspec:P [(match_operand:P 1 "vector_csr_operand" "rK")
+		   (match_operand 2 "const_int_operand" "i")
+		   (match_operand 3 "const_int_operand" "i")] UNSPEC_VSETVL))
+   (set (reg:SI VL_REGNUM)
+	(unspec:SI [(match_dup 1)
+		    (match_dup 2)
+		    (match_dup 3)] UNSPEC_VSETVL))
+   (set (reg:SI VTYPE_REGNUM)
+	(unspec:SI [(match_dup 2)
+		    (match_dup 3)] UNSPEC_VSETVL))]
+  "TARGET_XTHEADVECTOR"
+  "vsetvli\t%0,%1,e%2,%m3"
+  [(set_attr "type" "vsetvl")
+   (set_attr "mode" "<MODE>")
+   (set (attr "sew") (symbol_ref "INTVAL (operands[2])"))
+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[3])"))])
+
+;; vsetvl zero,zero,vtype instruction.
+;; This pattern has no side effects and does not set the X0 register.
+(define_insn "th_vsetvl_vtype_change_only" + [(set (reg:SI VTYPE_REGNUM) + (unspec:SI + [(match_operand 0 "const_int_operand" "i") + (match_operand 1 "const_int_operand" "i")] UNSPEC_VSETVL))] + "TARGET_XTHEADVECTOR" + "vsetvli\tzero,zero,e%0,%m1" + [(set_attr "type" "vsetvl") + (set_attr "mode" "SI") + (set (attr "sew") (symbol_ref "INTVAL (operands[0])")) + (set (attr "vlmul") (symbol_ref "INTVAL (operands[1])"))]) + +;; vsetvl zero,rs1,vtype instruction. +;; The reason we need this pattern since we should avoid setting X0 register +;; in vsetvl instruction pattern. +(define_insn "@th_vsetvl_discard_result" + [(set (reg:SI VL_REGNUM) + (unspec:SI [(match_operand:P 0 "vector_csr_operand" "rK") + (match_operand 1 "const_int_operand" "i") + (match_operand 2 "const_int_operand" "i")] UNSPEC_VSETVL)) + (set (reg:SI VTYPE_REGNUM) + (unspec:SI [(match_dup 1) + (match_dup 2)] UNSPEC_VSETVL))] + "TARGET_XTHEADVECTOR" + "vsetvli\tzero,%0,e%1,%m2" + [(set_attr "type" "vsetvl") + (set_attr "mode" "") + (set (attr "sew") (symbol_ref "INTVAL (operands[1])")) + (set (attr "vlmul") (symbol_ref "INTVAL (operands[2])"))]) \ No newline at end of file diff --git a/gcc/config/riscv/vector-iterators.md b/gcc/config/riscv/vector-iterators.md index 5f5f7b5b986..c0fc7a2441d 100644 --- a/gcc/config/riscv/vector-iterators.md +++ b/gcc/config/riscv/vector-iterators.md @@ -109,11 +109,11 @@ ]) (define_mode_iterator VI [ - RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32") + RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32") - RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32") + RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32") - RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32") + RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32") (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64") (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64") @@ -128,11 +128,11 @@ ;; allow the instruction and mode to be matched during combine et al. 
 (define_mode_iterator VF [
   (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH")
-  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH")
-  (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -140,11 +140,11 @@
 (define_mode_iterator VF_ZVFHMIN [
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64")
   (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64")
@@ -271,16 +271,16 @@
 ])
 
 (define_mode_iterator VEEWEXT2 [
-  RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32")
+  RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16")
-  (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
+  (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16")
+  (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32")
 
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
-  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
+  (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32")
 
   (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
   (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
@@ -290,10 +290,10 @@
 ])
 
 (define_mode_iterator VEEWEXT4 [
-  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32")
 
   (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32")
"TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32") + (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32") (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64") (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64") @@ -311,59 +311,59 @@ ]) (define_mode_iterator VEEWTRUNC2 [ - RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32") + RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32") - RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32") + RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32") (RVVM4HF "TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16") - (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16") - (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32") + (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16") + (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32") (RVVM4SI "TARGET_64BIT") (RVVM2SI "TARGET_64BIT") (RVVM1SI "TARGET_64BIT") - (RVVMF2SI "TARGET_MIN_VLEN > 32 && TARGET_64BIT") + (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT") (RVVM1SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_64BIT") - (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && TARGET_64BIT") + (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32 && TARGET_64BIT") ]) (define_mode_iterator VEEWTRUNC4 [ - RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32") + RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32") (RVVM2HI "TARGET_64BIT") (RVVM1HI "TARGET_64BIT") - (RVVMF2HI "TARGET_64BIT") - (RVVMF4HI "TARGET_MIN_VLEN > 32 && TARGET_64BIT") + (RVVMF2HI "!TARGET_XTHEADVECTOR && TARGET_64BIT") + (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT") (RVVM1HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT") - (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT") - (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && TARGET_64BIT") + (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_64BIT") + (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32 && TARGET_64BIT") ]) (define_mode_iterator VEEWTRUNC8 [ (RVVM1QI "TARGET_64BIT") - (RVVMF2QI "TARGET_64BIT") - (RVVMF4QI "TARGET_64BIT") - (RVVMF8QI "TARGET_MIN_VLEN > 32 && TARGET_64BIT") + (RVVMF2QI "!TARGET_XTHEADVECTOR && TARGET_64BIT") + (RVVMF4QI "!TARGET_XTHEADVECTOR && TARGET_64BIT") + (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32 && TARGET_64BIT") ]) (define_mode_iterator VEI16 [ - RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32") + RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32") - RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32") + RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32") (RVVM8HF "TARGET_VECTOR_ELEN_FP_16") (RVVM4HF 
"TARGET_VECTOR_ELEN_FP_16") (RVVM2HF "TARGET_VECTOR_ELEN_FP_16") - (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16") - (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32") + (RVVM1HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16") + (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32") - RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32") + RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32") (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32") - (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32") + (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32") (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64") (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64") @@ -452,11 +452,11 @@ ]) (define_mode_iterator VFULLI [ - RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32") + RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32") - RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32") + RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32") - RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32") + RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32") (RVVM8DI "TARGET_FULL_V") (RVVM4DI "TARGET_FULL_V") (RVVM2DI "TARGET_FULL_V") (RVVM1DI "TARGET_FULL_V") @@ -509,17 +509,17 @@ ]) (define_mode_iterator VI_QH [ - RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32") + RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32") - RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32") + RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32") ]) (define_mode_iterator VI_QHS [ - RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32") + RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32") - RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32") + RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32") - RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32") + RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32") (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)") (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)") @@ -560,11 +560,11 @@ ]) (define_mode_iterator VI_QHS_NO_M8 [ - RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32") + RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32") - RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32") + RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32") - RVVM4SI RVVM2SI RVVM1SI 
(RVVMF2SI "TARGET_MIN_VLEN > 32") + RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32") (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)") (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)") @@ -603,11 +603,11 @@ (define_mode_iterator VF_HS [ (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH") - (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH") - (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32") + (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH") + (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32") (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32") - (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32") + (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32") (V1HF "riscv_vector::vls_mode_valid_p (V1HFmode) && TARGET_ZVFH") (V2HF "riscv_vector::vls_mode_valid_p (V2HFmode) && TARGET_ZVFH") @@ -638,12 +638,12 @@ (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH") - (RVVMF2HF "TARGET_ZVFH") - (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32") + (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH") + (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32") (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") - (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32") + (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32") (V1HF "riscv_vector::vls_mode_valid_p (V1HFmode) && TARGET_ZVFH") (V2HF "riscv_vector::vls_mode_valid_p (V2HFmode) && TARGET_ZVFH") @@ -674,11 +674,11 @@ ]) (define_mode_iterator V_VLSI_QHS [ - RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32") + RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32") - RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32") + RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32") - RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32") + RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32") (V1QI "riscv_vector::vls_mode_valid_p (V1QImode)") (V2QI "riscv_vector::vls_mode_valid_p (V2QImode)") @@ -756,27 +756,27 @@ ;; E.g. when index mode = RVVM8QImde and Pmode = SImode, if it is not zero_extend or ;; scalar != 1, such gather/scatter is not allowed since we don't have RVVM32SImode. 
(define_mode_iterator RATIO64 [ - (RVVMF8QI "TARGET_MIN_VLEN > 32") - (RVVMF4HI "TARGET_MIN_VLEN > 32") - (RVVMF2SI "TARGET_MIN_VLEN > 32") + (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32") + (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32") + (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32") (RVVM1DI "TARGET_VECTOR_ELEN_64") - (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32") - (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32") + (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32") + (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64") ]) (define_mode_iterator RATIO32 [ - RVVMF4QI - RVVMF2HI + (RVVMF4QI "!TARGET_XTHEADVECTOR") + (RVVMF2HI "!TARGET_XTHEADVECTOR") RVVM1SI (RVVM2DI "TARGET_VECTOR_ELEN_64") - (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16") + (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16") (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") ]) (define_mode_iterator RATIO16 [ - RVVMF2QI + (RVVMF2QI "!TARGET_XTHEADVECTOR") RVVM1HI RVVM2SI (RVVM4DI "TARGET_VECTOR_ELEN_64") @@ -814,21 +814,21 @@ ]) (define_mode_iterator RATIO64I [ - (RVVMF8QI "TARGET_MIN_VLEN > 32") - (RVVMF4HI "TARGET_MIN_VLEN > 32") - (RVVMF2SI "TARGET_MIN_VLEN > 32") + (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32") + (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32") + (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32") (RVVM1DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT") ]) (define_mode_iterator RATIO32I [ - RVVMF4QI - RVVMF2HI + (RVVMF4QI "!TARGET_XTHEADVECTOR") + (RVVMF2HI "!TARGET_XTHEADVECTOR") RVVM1SI (RVVM2DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT") ]) (define_mode_iterator RATIO16I [ - RVVMF2QI + (RVVMF2QI "!TARGET_XTHEADVECTOR") RVVM1HI RVVM2SI (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT") @@ -873,21 +873,21 @@ ]) (define_mode_iterator V_FRACT [ - RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32") + (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32") - RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32") + (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32") - (RVVMF2HF "TARGET_VECTOR_ELEN_FP_16") (RVVMF4HF "TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32") + (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16") (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_MIN_VLEN > 32") - (RVVMF2SI "TARGET_MIN_VLEN > 32") + (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32") - (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32") + (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32") ]) (define_mode_iterator VWEXTI [ - RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32") + RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32") - RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32") + RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32") (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64") (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64") @@ -933,7 +933,7 @@ (RVVM4SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32") (RVVM1SF "TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32") - (RVVMF2SF 
"TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32") + (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_16 && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32") (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64") (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64") @@ -966,7 +966,7 @@ (RVVM4SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32") (RVVM1SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32") - (RVVMF2SF "TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32") + (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32") (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64") (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64") @@ -996,7 +996,7 @@ (define_mode_iterator VWCONVERTI [ (RVVM8SI "TARGET_ZVFH") (RVVM4SI "TARGET_ZVFH") (RVVM2SI "TARGET_ZVFH") (RVVM1SI "TARGET_ZVFH") - (RVVMF2SI "TARGET_ZVFH") + (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_ZVFH") (RVVM8DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32") (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_VECTOR_ELEN_FP_32") @@ -1045,7 +1045,7 @@ ]) (define_mode_iterator VQEXTI [ - RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32") + RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32") (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64") (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64") @@ -1456,11 +1456,11 @@ ;; VINDEXED [VI8 VI16 VI32 (VI64 "TARGET_64BIT")]. (define_mode_iterator VINDEXED [ - RVVM8QI RVVM4QI RVVM2QI RVVM1QI RVVMF2QI RVVMF4QI (RVVMF8QI "TARGET_MIN_VLEN > 32") + RVVM8QI RVVM4QI RVVM2QI RVVM1QI (RVVMF2QI "!TARGET_XTHEADVECTOR") (RVVMF4QI "!TARGET_XTHEADVECTOR") (RVVMF8QI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32") - RVVM8HI RVVM4HI RVVM2HI RVVM1HI RVVMF2HI (RVVMF4HI "TARGET_MIN_VLEN > 32") + RVVM8HI RVVM4HI RVVM2HI RVVM1HI (RVVMF2HI "!TARGET_XTHEADVECTOR") (RVVMF4HI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32") - RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32") + RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "!TARGET_XTHEADVECTOR && TARGET_MIN_VLEN > 32") (RVVM8DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT") (RVVM4DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT") @@ -1468,12 +1468,12 @@ (RVVM1DI "TARGET_VECTOR_ELEN_64 && TARGET_64BIT") (RVVM8HF "TARGET_ZVFH") (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH") - (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH") - (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32") + (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH") + (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32") (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32") (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") - (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32") + (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32") (RVVM8DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_64BIT") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64 && TARGET_64BIT") @@ -3173,11 +3173,11 @@ (define_mode_iterator V_VLS_F_CONVERT_SI [ (RVVM4HF "TARGET_ZVFH") (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH") - (RVVMF2HF "TARGET_ZVFH") (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32") + (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH") (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32") (RVVM8SF "TARGET_VECTOR_ELEN_FP_32") 
(RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32") (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") - (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32") + (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32") (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64") @@ -3290,12 +3290,12 @@ ]) (define_mode_iterator V_VLS_F_CONVERT_DI [ - (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "TARGET_ZVFH") - (RVVMF4HF "TARGET_ZVFH && TARGET_MIN_VLEN > 32") + (RVVM2HF "TARGET_ZVFH") (RVVM1HF "TARGET_ZVFH") (RVVMF2HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH") + (RVVMF4HF "!TARGET_XTHEADVECTOR && TARGET_ZVFH && TARGET_MIN_VLEN > 32") (RVVM4SF "TARGET_VECTOR_ELEN_FP_32") (RVVM2SF "TARGET_VECTOR_ELEN_FP_32") (RVVM1SF "TARGET_VECTOR_ELEN_FP_32") - (RVVMF2SF "TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32") + (RVVMF2SF "!TARGET_XTHEADVECTOR && TARGET_VECTOR_ELEN_FP_32 && TARGET_MIN_VLEN > 32") (RVVM8DF "TARGET_VECTOR_ELEN_FP_64") (RVVM4DF "TARGET_VECTOR_ELEN_FP_64") (RVVM2DF "TARGET_VECTOR_ELEN_FP_64") (RVVM1DF "TARGET_VECTOR_ELEN_FP_64") diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md index 036b2425f32..9941651341d 100644 --- a/gcc/config/riscv/vector.md +++ b/gcc/config/riscv/vector.md @@ -83,7 +83,7 @@ ;; check. However, we need default value of SEW for vsetvl instruction since there ;; is no field for ratio in the vsetvl instruction encoding. (define_attr "sew" "" - (cond [(eq_attr "mode" "RVVMF64BI,RVVMF32BI,RVVMF16BI,RVVMF8BI,RVVMF4BI,RVVMF2BI,RVVM1BI,\ + (cond [(eq_attr "mode" "RVVMF8BI,RVVMF4BI,RVVMF2BI,RVVM1BI,\ RVVM8QI,RVVM4QI,RVVM2QI,RVVM1QI,RVVMF2QI,RVVMF4QI,RVVMF8QI,\ RVVM1x8QI,RVVMF2x8QI,RVVMF4x8QI,RVVMF8x8QI,\ RVVM1x7QI,RVVMF2x7QI,RVVMF4x7QI,RVVMF8x7QI,\ @@ -95,6 +95,18 @@ V1QI,V2QI,V4QI,V8QI,V16QI,V32QI,V64QI,V128QI,V256QI,V512QI,V1024QI,V2048QI,V4096QI,\ V1BI,V2BI,V4BI,V8BI,V16BI,V32BI,V64BI,V128BI,V256BI,V512BI,V1024BI,V2048BI,V4096BI") (const_int 8) + (eq_attr "mode" "RVVMF16BI") + (if_then_else (match_test "TARGET_XTHEADVECTOR") + (const_int 16) + (const_int 8)) + (eq_attr "mode" "RVVMF32BI") + (if_then_else (match_test "TARGET_XTHEADVECTOR") + (const_int 32) + (const_int 8)) + (eq_attr "mode" "RVVMF64BI") + (if_then_else (match_test "TARGET_XTHEADVECTOR") + (const_int 64) + (const_int 8)) (eq_attr "mode" "RVVM8HI,RVVM4HI,RVVM2HI,RVVM1HI,RVVMF2HI,RVVMF4HI,\ RVVM1x8HI,RVVMF2x8HI,RVVMF4x8HI,\ RVVM1x7HI,RVVMF2x7HI,RVVMF4x7HI,\ @@ -155,9 +167,9 @@ (eq_attr "mode" "RVVM4QI,RVVMF2BI") (symbol_ref "riscv_vector::LMUL_4") (eq_attr "mode" "RVVM2QI,RVVMF4BI") (symbol_ref "riscv_vector::LMUL_2") (eq_attr "mode" "RVVM1QI,RVVMF8BI") (symbol_ref "riscv_vector::LMUL_1") - (eq_attr "mode" "RVVMF2QI,RVVMF16BI") (symbol_ref "riscv_vector::LMUL_F2") - (eq_attr "mode" "RVVMF4QI,RVVMF32BI") (symbol_ref "riscv_vector::LMUL_F4") - (eq_attr "mode" "RVVMF8QI,RVVMF64BI") (symbol_ref "riscv_vector::LMUL_F8") + (eq_attr "mode" "RVVMF2QI,RVVMF16BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F2") + (eq_attr "mode" "RVVMF4QI,RVVMF32BI") (symbol_ref "TARGET_XTHEADVECTOR ? riscv_vector::LMUL_1 : riscv_vector::LMUL_F4") + (eq_attr "mode" "RVVMF8QI,RVVMF64BI") (symbol_ref "TARGET_XTHEADVECTOR ? 
riscv_vector::LMUL_1 : riscv_vector::LMUL_F8") (eq_attr "mode" "RVVM8HI") (symbol_ref "riscv_vector::LMUL_8") (eq_attr "mode" "RVVM4HI") (symbol_ref "riscv_vector::LMUL_4") (eq_attr "mode" "RVVM2HI") (symbol_ref "riscv_vector::LMUL_2") @@ -428,6 +440,10 @@ vislide1up,vislide1down,vfslide1up,vfslide1down,\ vgather,vcompress,vlsegdux,vlsegdox,vssegtux,vssegtox") (const_int INVALID_ATTRIBUTE) + (and (eq_attr "type" "vlde,vste,vlsegde,vssegte,vlsegds,vssegts,\ + vlsegdff,vssegtux,vlsegdox,vlsegdux") + (match_test "TARGET_XTHEADVECTOR")) + (const_int INVALID_ATTRIBUTE) (eq_attr "mode" "RVVM8QI,RVVM1BI") (const_int 1) (eq_attr "mode" "RVVM4QI,RVVMF2BI") (const_int 2) (eq_attr "mode" "RVVM2QI,RVVMF4BI") (const_int 4) @@ -888,6 +904,8 @@ (symbol_ref "riscv_vector::FRM_DYN")] (symbol_ref "riscv_vector::FRM_NONE"))) +(include "thead-vector.md") + ;; ----------------------------------------------------------------- ;; ---- Miscellaneous Operations ;; ----------------------------------------------------------------- @@ -1097,7 +1115,7 @@ (define_insn "*mov_whole" [(set (match_operand:V_WHOLE 0 "reg_or_mem_operand" "=vr, m,vr") (match_operand:V_WHOLE 1 "reg_or_mem_operand" " m,vr,vr"))] - "TARGET_VECTOR" + "TARGET_VECTOR && !TARGET_XTHEADVECTOR" "@ vl%m1re.v\t%0,%1 vs%m1r.v\t%1,%0 @@ -1125,7 +1143,7 @@ (define_insn "*mov" [(set (match_operand:VB 0 "register_operand" "=vr") (match_operand:VB 1 "register_operand" " vr"))] - "TARGET_VECTOR" + "TARGET_VECTOR && !TARGET_XTHEADVECTOR" "vmv1r.v\t%0,%1" [(set_attr "type" "vmov") (set_attr "mode" "")]) @@ -3680,7 +3698,7 @@ (any_extend:VWEXTI (match_operand: 3 "register_operand" "W21,W21,W21,W21,W42,W42,W42,W42,W84,W84,W84,W84, vr, vr")) (match_operand:VWEXTI 2 "vector_merge_operand" " vu, vu, 0, 0, vu, vu, 0, 0, vu, vu, 0, 0, vu, 0")))] - "TARGET_VECTOR" + "TARGET_VECTOR && !TARGET_XTHEADVECTOR" "vext.vf2\t%0,%3%p1" [(set_attr "type" "vext") (set_attr "mode" "") @@ -3701,7 +3719,7 @@ (any_extend:VQEXTI (match_operand: 3 "register_operand" "W43,W43,W43,W43,W86,W86,W86,W86, vr, vr")) (match_operand:VQEXTI 2 "vector_merge_operand" " vu, vu, 0, 0, vu, vu, 0, 0, vu, 0")))] - "TARGET_VECTOR" + "TARGET_VECTOR && !TARGET_XTHEADVECTOR" "vext.vf4\t%0,%3%p1" [(set_attr "type" "vext") (set_attr "mode" "") @@ -3722,7 +3740,7 @@ (any_extend:VOEXTI (match_operand: 3 "register_operand" "W87,W87,W87,W87, vr, vr")) (match_operand:VOEXTI 2 "vector_merge_operand" " vu, vu, 0, 0, vu, 0")))] - "TARGET_VECTOR" + "TARGET_VECTOR && !TARGET_XTHEADVECTOR" "vext.vf8\t%0,%3%p1" [(set_attr "type" "vext") (set_attr "mode" "") diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c index 2e0e12aa045..2eef9e1e1a8 100644 --- a/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/abi-1.c @@ -1,4 +1,4 @@ -/* { dg-do compile } */ +/* { dg-do compile { target { ! 
riscv_xtheadvector } } } */ /* { dg-skip-if "test rvv intrinsic" { *-*-* } { "*" } { "-march=rv*v*" } } */ void foo0 () {__rvv_bool64_t t;}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c index 3d81b179235..ef329e30785 100644 --- a/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/pragma-1.c @@ -1,4 +1,4 @@ /* { dg-do compile } */ /* { dg-options "-O3 -march=rv32gc -mabi=ilp32d" } */ -#pragma riscv intrinsic "vector" /* { dg-error {#pragma riscv intrinsic' option 'vector' needs 'V' extension enabled} } */ +#pragma riscv intrinsic "vector" /* { dg-error {#pragma riscv intrinsic' option 'vector' needs 'V' or 'XTHEADVECTOR' extension enabled} } */
diff --git a/gcc/testsuite/lib/target-supports.exp b/gcc/testsuite/lib/target-supports.exp index 7f13ff0ca56..70df6b1401c 100644 --- a/gcc/testsuite/lib/target-supports.exp +++ b/gcc/testsuite/lib/target-supports.exp @@ -1952,6 +1952,18 @@ proc check_effective_target_riscv_zbb { } { }] } +# Return 1 if the target arch supports the XTheadVector extension, 0 otherwise. +# Cache the result. + +proc check_effective_target_riscv_xtheadvector { } { + return [check_no_compiler_messages riscv_ext_xtheadvector assembly { + #ifndef __riscv_xtheadvector + #error "Not __riscv_xtheadvector" + #endif + }] +} + + # Return 1 if we can execute code when using dg-add-options riscv_v proc check_effective_target_riscv_v_ok { } {
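As a usage sketch (a hypothetical testcase, not one added by this series; abi-1.c above shows the negative form "{ target { ! riscv_xtheadvector } }"), a test gated on the new effective target would look like:

    /* { dg-do compile { target { riscv_xtheadvector } } } */
    void foo ()
    {
      /* Built-in RVV type; declaring it is enough to verify that
         vector support is active under XTheadVector.  */
      __rvv_int8m1_t t;
    }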
From patchwork Mon Dec 25 06:31:50 2023
X-Patchwork-Submitter: joshua
X-Patchwork-Id: 1880071
From: "Jun Sha (Joshua)"
To: gcc-patches@gcc.gnu.org
Cc: jim.wilson.gcc@gmail.com, palmer@dabbelt.com, andrew@sifive.com, philipp.tomsich@vrull.eu, jeffreyalaw@gmail.com, christoph.muellner@vrull.eu, juzhe.zhong@rivai.ai, "Jun Sha (Joshua)", Jin Ma, Xianmiao Qu
Subject: [PATCH v4 6/6] RISC-V: Add support for xtheadvector-specific intrinsics.
Date: Mon, 25 Dec 2023 14:31:50 +0800
Message-Id: <20231225063150.820-1-cooper.joshua@linux.alibaba.com>
In-Reply-To: <20231220123656.661-1-cooper.joshua@linux.alibaba.com>
References: <20231220123656.661-1-cooper.joshua@linux.alibaba.com>

This patch only involves the generation of xtheadvector special load/store instructions and vext instructions. gcc/ChangeLog: * config/riscv/riscv-vector-builtins-bases.cc (class th_loadstore_width): Define new builtin bases. (BASE): Define new builtin bases. * config/riscv/riscv-vector-builtins-bases.h: Define new builtin class. * config/riscv/riscv-vector-builtins-functions.def (vlsegff): Include thead-vector-builtins-functions.def. * config/riscv/riscv-vector-builtins-shapes.cc (struct th_loadstore_width_def): Define new builtin shapes. (struct th_indexed_loadstore_width_def): Define new builtin shapes. (SHAPE): Define new builtin shapes. * config/riscv/riscv-vector-builtins-shapes.h: Define new builtin shapes. * config/riscv/riscv-vector-builtins-types.def (DEF_RVV_I8_OPS): Add datatypes for XTheadVector. (DEF_RVV_I16_OPS): Add datatypes for XTheadVector. (DEF_RVV_I32_OPS): Add datatypes for XTheadVector. (DEF_RVV_U8_OPS): Add datatypes for XTheadVector. (DEF_RVV_U16_OPS): Add datatypes for XTheadVector. (DEF_RVV_U32_OPS): Add datatypes for XTheadVector. (vint8m1_t): Add datatypes for XTheadVector. (vint8m2_t): Likewise. (vint8m4_t): Likewise. (vint8m8_t): Likewise. (vint16m1_t): Likewise. (vint16m2_t): Likewise. (vint16m4_t): Likewise. (vint16m8_t): Likewise. (vint32m1_t): Likewise. (vint32m2_t): Likewise. (vint32m4_t): Likewise. (vint32m8_t): Likewise. (vint64m1_t): Likewise. (vint64m2_t): Likewise. (vint64m4_t): Likewise. (vint64m8_t): Likewise.
(vuint8m1_t): Likewise. (vuint8m2_t): Likewise. (vuint8m4_t): Likewise. (vuint8m8_t): Likewise. (vuint16m1_t): Likewise. (vuint16m2_t): Likewise. (vuint16m4_t): Likewise. (vuint16m8_t): Likewise. (vuint32m1_t): Likewise. (vuint32m2_t): Likewise. (vuint32m4_t): Likewise. (vuint32m8_t): Likewise. (vuint64m1_t): Likewise. (vuint64m2_t): Likewise. (vuint64m4_t): Likewise. (vuint64m8_t): Likewise. * config/riscv/riscv-vector-builtins.cc (DEF_RVV_I8_OPS): Add datatypes for XTheadVector. (DEF_RVV_I16_OPS): Add datatypes for XTheadVector. (DEF_RVV_I32_OPS): Add datatypes for XTheadVector. (DEF_RVV_U8_OPS): Add datatypes for XTheadVector. (DEF_RVV_U16_OPS): Add datatypes for XTheadVector. (DEF_RVV_U32_OPS): Add datatypes for XTheadVector. * config/riscv/thead-vector-builtins-functions.def: New file. * config/riscv/thead-vector.md: Add new patterns. gcc/testsuite/ChangeLog: * gcc.target/riscv/rvv/xtheadvector/vlb-vsb.c: New test. * gcc.target/riscv/rvv/xtheadvector/vlbu-vsb.c: New test. * gcc.target/riscv/rvv/xtheadvector/vlh-vsh.c: New test. * gcc.target/riscv/rvv/xtheadvector/vlhu-vsh.c: New test. * gcc.target/riscv/rvv/xtheadvector/vlw-vsw.c: New test. * gcc.target/riscv/rvv/xtheadvector/vlwu-vsw.c: New test. Co-authored-by: Jin Ma Co-authored-by: Xianmiao Qu Co-authored-by: Christoph Müllner --- gcc/config.gcc | 2 +- .../riscv/riscv-vector-builtins-shapes.cc | 126 +++++++ .../riscv/riscv-vector-builtins-shapes.h | 3 + .../riscv/riscv-vector-builtins-types.def | 120 +++++++ gcc/config/riscv/riscv-vector-builtins.cc | 313 +++++++++++++++++- gcc/config/riscv/riscv-vector-builtins.h | 3 + gcc/config/riscv/t-riscv | 16 + .../riscv/thead-vector-builtins-functions.def | 39 +++ gcc/config/riscv/thead-vector-builtins.cc | 200 +++++++++++ gcc/config/riscv/thead-vector-builtins.h | 64 ++++ gcc/config/riscv/thead-vector.md | 255 +++++++++++++- 11 files changed, 1138 insertions(+), 3 deletions(-) create mode 100644 gcc/config/riscv/thead-vector-builtins-functions.def create mode 100644 gcc/config/riscv/thead-vector-builtins.cc create mode 100644 gcc/config/riscv/thead-vector-builtins.h diff --git a/gcc/config.gcc b/gcc/config.gcc index 1445d98c147..4478395ab77 100644 --- a/gcc/config.gcc +++ b/gcc/config.gcc @@ -547,7 +547,7 @@ riscv*) extra_objs="riscv-builtins.o riscv-c.o riscv-sr.o riscv-shorten-memrefs.o riscv-selftests.o riscv-string.o" extra_objs="${extra_objs} riscv-v.o riscv-vsetvl.o riscv-vector-costs.o riscv-avlprop.o" extra_objs="${extra_objs} riscv-vector-builtins.o riscv-vector-builtins-shapes.o riscv-vector-builtins-bases.o" - extra_objs="${extra_objs} thead.o riscv-target-attr.o" + extra_objs="${extra_objs} thead.o riscv-target-attr.o thead-vector-builtins.o" d_target_objs="riscv-d.o" extra_headers="riscv_vector.h riscv_th_vector.h" target_gtfiles="$target_gtfiles \$(srcdir)/config/riscv/riscv-vector-builtins.cc" diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.cc b/gcc/config/riscv/riscv-vector-builtins-shapes.cc index 6b49404a1fa..7d7c1f6f4b1 100644 --- a/gcc/config/riscv/riscv-vector-builtins-shapes.cc +++ b/gcc/config/riscv/riscv-vector-builtins-shapes.cc @@ -211,6 +211,104 @@ struct indexed_loadstore_def : public function_shape } }; +/* th_loadstore_width_def class. */ +struct th_loadstore_width_def : public build_base +{ + void build (function_builder &b, + const function_group_info &group) const override + { + /* Report an error if there is no xtheadvector. 
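+     (Strictly speaking no error is reported here: when the extension is
+     absent, build () just returns early, so the th_* builtins are never
+     registered and any use of them fails at name lookup.)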
*/ + if (!TARGET_XTHEADVECTOR) + return; + + build_all (b, group); + } + + char *get_name (function_builder &b, const function_instance &instance, + bool overloaded_p) const override + { + /* Report an error if there is no xtheadvector. */ + if (!TARGET_XTHEADVECTOR) + return nullptr; + + /* Return nullptr if it can not be overloaded. */ + if (overloaded_p && !instance.base->can_be_overloaded_p (instance.pred)) + return nullptr; + + b.append_base_name (instance.base_name); + + /* vop_v --> vop_v_. */ + if (!overloaded_p) + { + /* vop --> vop_v. */ + b.append_name (operand_suffixes[instance.op_info->op]); + /* vop_v --> vop_v_. */ + b.append_name (type_suffixes[instance.type.index].vector); + } + + /* According to rvv-intrinsic-doc, it does not add "_m" suffix + for vop_m C++ overloaded API. */ + if (overloaded_p && instance.pred == PRED_TYPE_m) + return b.finish_name (); + b.append_name (predication_suffixes[instance.pred]); + return b.finish_name (); + } +}; + + +/* th_indexed_loadstore_width_def class. */ +struct th_indexed_loadstore_width_def : public function_shape +{ + void build (function_builder &b, + const function_group_info &group) const override + { + /* Report an error if there is no xtheadvector. */ + if (!TARGET_XTHEADVECTOR) + return; + + for (unsigned int pred_idx = 0; group.preds[pred_idx] != NUM_PRED_TYPES; + ++pred_idx) + { + for (unsigned int vec_type_idx = 0; + group.ops_infos.types[vec_type_idx].index != NUM_VECTOR_TYPES; + ++vec_type_idx) + { + tree index_type = group.ops_infos.args[1].get_tree_type ( + group.ops_infos.types[vec_type_idx].index); + if (!index_type) + continue; + build_one (b, group, pred_idx, vec_type_idx); + } + } + } + + char *get_name (function_builder &b, const function_instance &instance, + bool overloaded_p) const override + { + + /* Return nullptr if it can not be overloaded. */ + if (overloaded_p && !instance.base->can_be_overloaded_p (instance.pred)) + return nullptr; + + b.append_base_name (instance.base_name); + /* vop_v --> vop_v_. */ + if (!overloaded_p) + { + /* vop --> vop_v. */ + b.append_name (operand_suffixes[instance.op_info->op]); + /* vop_v --> vop_v_. */ + b.append_name (type_suffixes[instance.type.index].vector); + } + + /* According to rvv-intrinsic-doc, it does not add "_m" suffix + for vop_m C++ overloaded API. */ + if (overloaded_p && instance.pred == PRED_TYPE_m) + return b.finish_name (); + b.append_name (predication_suffixes[instance.pred]); + return b.finish_name (); + } +}; + /* alu_def class. */ struct alu_def : public build_base { @@ -632,6 +730,31 @@ struct reduc_alu_def : public build_base } }; +/* th_extract_def class. */ +struct th_extract_def : public build_base +{ + void build (function_builder &b, + const function_group_info &group) const override + { + /* Report an error if there is no xtheadvector. */ + if (!TARGET_XTHEADVECTOR) + return; + + build_all (b, group); + } + + char *get_name (function_builder &b, const function_instance &instance, + bool overloaded_p) const override + { + b.append_base_name (instance.base_name); + if (overloaded_p) + return b.finish_name (); + b.append_name (type_suffixes[instance.type.index].vector); + b.append_name (type_suffixes[instance.type.index].scalar); + return b.finish_name (); + } +}; + /* scalar_move_def class. 
*/ struct scalar_move_def : public build_base { @@ -1011,6 +1134,8 @@ SHAPE(vsetvl, vsetvl) SHAPE(vsetvl, vsetvlmax) SHAPE(loadstore, loadstore) SHAPE(indexed_loadstore, indexed_loadstore) +SHAPE(th_loadstore_width, th_loadstore_width) +SHAPE(th_indexed_loadstore_width, th_indexed_loadstore_width) SHAPE(alu, alu) SHAPE(alu_frm, alu_frm) SHAPE(widen_alu, widen_alu) @@ -1023,6 +1148,7 @@ SHAPE(move, move) SHAPE(mask_alu, mask_alu) SHAPE(reduc_alu, reduc_alu) SHAPE(reduc_alu_frm, reduc_alu_frm) +SHAPE(th_extract, th_extract) SHAPE(scalar_move, scalar_move) SHAPE(vundefined, vundefined) SHAPE(misc, misc) diff --git a/gcc/config/riscv/riscv-vector-builtins-shapes.h b/gcc/config/riscv/riscv-vector-builtins-shapes.h index df9884bb572..a822ba05bdd 100644 --- a/gcc/config/riscv/riscv-vector-builtins-shapes.h +++ b/gcc/config/riscv/riscv-vector-builtins-shapes.h @@ -28,6 +28,8 @@ extern const function_shape *const vsetvl; extern const function_shape *const vsetvlmax; extern const function_shape *const loadstore; extern const function_shape *const indexed_loadstore; +extern const function_shape *const th_loadstore_width; +extern const function_shape *const th_indexed_loadstore_width; extern const function_shape *const alu; extern const function_shape *const alu_frm; extern const function_shape *const widen_alu; @@ -41,6 +43,7 @@ extern const function_shape *const mask_alu; extern const function_shape *const reduc_alu; extern const function_shape *const reduc_alu_frm; extern const function_shape *const scalar_move; +extern const function_shape *const th_extract; extern const function_shape *const vundefined; extern const function_shape *const misc; extern const function_shape *const vset; diff --git a/gcc/config/riscv/riscv-vector-builtins-types.def b/gcc/config/riscv/riscv-vector-builtins-types.def index 6aa45ae9a7e..e373d29e51c 100644 --- a/gcc/config/riscv/riscv-vector-builtins-types.def +++ b/gcc/config/riscv/riscv-vector-builtins-types.def @@ -24,12 +24,48 @@ along with GCC; see the file COPYING3. If not see #define DEF_RVV_I_OPS(TYPE, REQUIRE) #endif +/* Use "DEF_RVV_I8_OPS" macro include some signed integer (i8/i16/i32/i64) + which will be iterated and registered as intrinsic functions. */ +#ifndef DEF_RVV_I8_OPS +#define DEF_RVV_I8_OPS(TYPE, REQUIRE) +#endif + +/* Use "DEF_RVV_I16_OPS" macro include some signed integer (i16/i32/i64) + which will be iterated and registered as intrinsic functions. */ +#ifndef DEF_RVV_I16_OPS +#define DEF_RVV_I16_OPS(TYPE, REQUIRE) +#endif + +/* Use "DEF_RVV_I32_OPS" macro include some signed integer (i32/i64) + which will be iterated and registered as intrinsic functions. */ +#ifndef DEF_RVV_I32_OPS +#define DEF_RVV_I32_OPS(TYPE, REQUIRE) +#endif + /* Use "DEF_RVV_U_OPS" macro include all unsigned integer which will be iterated and registered as intrinsic functions. */ #ifndef DEF_RVV_U_OPS #define DEF_RVV_U_OPS(TYPE, REQUIRE) #endif +/* Use "DEF_RVV_U8_OPS" macro include some unsigned integer (i8/i16/i32/i64) + which will be iterated and registered as intrinsic functions. */ +#ifndef DEF_RVV_U8_OPS +#define DEF_RVV_U8_OPS(TYPE, REQUIRE) +#endif + +/* Use "DEF_RVV_U16_OPS" macro include some unsigned integer (i16/i32/i64) + which will be iterated and registered as intrinsic functions. */ +#ifndef DEF_RVV_U16_OPS +#define DEF_RVV_U16_OPS(TYPE, REQUIRE) +#endif + +/* Use "DEF_RVV_U32_OPS" macro include some unsigned integer (i32/i64) + which will be iterated and registered as intrinsic functions. 
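+   (These comments describe an X-macro scheme: riscv-vector-builtins.cc
+   re-includes riscv-vector-builtins-types.def several times, each time
+   defining only a subset of the DEF_RVV_*_OPS hooks, to build the
+   per-type-class operand tables such as u32_ops used further down.)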
*/ +#ifndef DEF_RVV_U32_OPS +#define DEF_RVV_U32_OPS(TYPE, REQUIRE) +#endif + /* Use "DEF_RVV_F_OPS" macro include all floating-point which will be iterated and registered as intrinsic functions. */ #ifndef DEF_RVV_F_OPS @@ -362,6 +398,45 @@ DEF_RVV_I_OPS (vint64m2_t, RVV_REQUIRE_ELEN_64) DEF_RVV_I_OPS (vint64m4_t, RVV_REQUIRE_ELEN_64) DEF_RVV_I_OPS (vint64m8_t, RVV_REQUIRE_ELEN_64) +DEF_RVV_I8_OPS (vint8m1_t, 0) +DEF_RVV_I8_OPS (vint8m2_t, 0) +DEF_RVV_I8_OPS (vint8m4_t, 0) +DEF_RVV_I8_OPS (vint8m8_t, 0) +DEF_RVV_I8_OPS (vint16m1_t, 0) +DEF_RVV_I8_OPS (vint16m2_t, 0) +DEF_RVV_I8_OPS (vint16m4_t, 0) +DEF_RVV_I8_OPS (vint16m8_t, 0) +DEF_RVV_I8_OPS (vint32m1_t, 0) +DEF_RVV_I8_OPS (vint32m2_t, 0) +DEF_RVV_I8_OPS (vint32m4_t, 0) +DEF_RVV_I8_OPS (vint32m8_t, 0) +DEF_RVV_I8_OPS (vint64m1_t, RVV_REQUIRE_ELEN_64) +DEF_RVV_I8_OPS (vint64m2_t, RVV_REQUIRE_ELEN_64) +DEF_RVV_I8_OPS (vint64m4_t, RVV_REQUIRE_ELEN_64) +DEF_RVV_I8_OPS (vint64m8_t, RVV_REQUIRE_ELEN_64) + +DEF_RVV_I16_OPS (vint16m1_t, 0) +DEF_RVV_I16_OPS (vint16m2_t, 0) +DEF_RVV_I16_OPS (vint16m4_t, 0) +DEF_RVV_I16_OPS (vint16m8_t, 0) +DEF_RVV_I16_OPS (vint32m1_t, 0) +DEF_RVV_I16_OPS (vint32m2_t, 0) +DEF_RVV_I16_OPS (vint32m4_t, 0) +DEF_RVV_I16_OPS (vint32m8_t, 0) +DEF_RVV_I16_OPS (vint64m1_t, RVV_REQUIRE_ELEN_64) +DEF_RVV_I16_OPS (vint64m2_t, RVV_REQUIRE_ELEN_64) +DEF_RVV_I16_OPS (vint64m4_t, RVV_REQUIRE_ELEN_64) +DEF_RVV_I16_OPS (vint64m8_t, RVV_REQUIRE_ELEN_64) + +DEF_RVV_I32_OPS (vint32m1_t, 0) +DEF_RVV_I32_OPS (vint32m2_t, 0) +DEF_RVV_I32_OPS (vint32m4_t, 0) +DEF_RVV_I32_OPS (vint32m8_t, 0) +DEF_RVV_I32_OPS (vint64m1_t, RVV_REQUIRE_ELEN_64) +DEF_RVV_I32_OPS (vint64m2_t, RVV_REQUIRE_ELEN_64) +DEF_RVV_I32_OPS (vint64m4_t, RVV_REQUIRE_ELEN_64) +DEF_RVV_I32_OPS (vint64m8_t, RVV_REQUIRE_ELEN_64) + DEF_RVV_U_OPS (vuint8mf8_t, RVV_REQUIRE_MIN_VLEN_64) DEF_RVV_U_OPS (vuint8mf4_t, 0) DEF_RVV_U_OPS (vuint8mf2_t, 0) @@ -385,6 +460,45 @@ DEF_RVV_U_OPS (vuint64m2_t, RVV_REQUIRE_ELEN_64) DEF_RVV_U_OPS (vuint64m4_t, RVV_REQUIRE_ELEN_64) DEF_RVV_U_OPS (vuint64m8_t, RVV_REQUIRE_ELEN_64) +DEF_RVV_U8_OPS (vuint8m1_t, 0) +DEF_RVV_U8_OPS (vuint8m2_t, 0) +DEF_RVV_U8_OPS (vuint8m4_t, 0) +DEF_RVV_U8_OPS (vuint8m8_t, 0) +DEF_RVV_U8_OPS (vuint16m1_t, 0) +DEF_RVV_U8_OPS (vuint16m2_t, 0) +DEF_RVV_U8_OPS (vuint16m4_t, 0) +DEF_RVV_U8_OPS (vuint16m8_t, 0) +DEF_RVV_U8_OPS (vuint32m1_t, 0) +DEF_RVV_U8_OPS (vuint32m2_t, 0) +DEF_RVV_U8_OPS (vuint32m4_t, 0) +DEF_RVV_U8_OPS (vuint32m8_t, 0) +DEF_RVV_U8_OPS (vuint64m1_t, RVV_REQUIRE_ELEN_64) +DEF_RVV_U8_OPS (vuint64m2_t, RVV_REQUIRE_ELEN_64) +DEF_RVV_U8_OPS (vuint64m4_t, RVV_REQUIRE_ELEN_64) +DEF_RVV_U8_OPS (vuint64m8_t, RVV_REQUIRE_ELEN_64) + +DEF_RVV_U16_OPS (vuint16m1_t, 0) +DEF_RVV_U16_OPS (vuint16m2_t, 0) +DEF_RVV_U16_OPS (vuint16m4_t, 0) +DEF_RVV_U16_OPS (vuint16m8_t, 0) +DEF_RVV_U16_OPS (vuint32m1_t, 0) +DEF_RVV_U16_OPS (vuint32m2_t, 0) +DEF_RVV_U16_OPS (vuint32m4_t, 0) +DEF_RVV_U16_OPS (vuint32m8_t, 0) +DEF_RVV_U16_OPS (vuint64m1_t, RVV_REQUIRE_ELEN_64) +DEF_RVV_U16_OPS (vuint64m2_t, RVV_REQUIRE_ELEN_64) +DEF_RVV_U16_OPS (vuint64m4_t, RVV_REQUIRE_ELEN_64) +DEF_RVV_U16_OPS (vuint64m8_t, RVV_REQUIRE_ELEN_64) + +DEF_RVV_U32_OPS (vuint32m1_t, 0) +DEF_RVV_U32_OPS (vuint32m2_t, 0) +DEF_RVV_U32_OPS (vuint32m4_t, 0) +DEF_RVV_U32_OPS (vuint32m8_t, 0) +DEF_RVV_U32_OPS (vuint64m1_t, RVV_REQUIRE_ELEN_64) +DEF_RVV_U32_OPS (vuint64m2_t, RVV_REQUIRE_ELEN_64) +DEF_RVV_U32_OPS (vuint64m4_t, RVV_REQUIRE_ELEN_64) +DEF_RVV_U32_OPS (vuint64m8_t, RVV_REQUIRE_ELEN_64) + DEF_RVV_F_OPS (vfloat16mf4_t, RVV_REQUIRE_ELEN_FP_16 | 
RVV_REQUIRE_MIN_VLEN_64) DEF_RVV_F_OPS (vfloat16mf2_t, RVV_REQUIRE_ELEN_FP_16) DEF_RVV_F_OPS (vfloat16m1_t, RVV_REQUIRE_ELEN_FP_16) @@ -1356,7 +1470,13 @@ DEF_RVV_TUPLE_OPS (vfloat64m2x4_t, RVV_REQUIRE_ELEN_FP_64) DEF_RVV_TUPLE_OPS (vfloat64m4x2_t, RVV_REQUIRE_ELEN_FP_64) #undef DEF_RVV_I_OPS +#undef DEF_RVV_I8_OPS +#undef DEF_RVV_I16_OPS +#undef DEF_RVV_I32_OPS #undef DEF_RVV_U_OPS +#undef DEF_RVV_U8_OPS +#undef DEF_RVV_U16_OPS +#undef DEF_RVV_U32_OPS #undef DEF_RVV_F_OPS #undef DEF_RVV_B_OPS #undef DEF_RVV_WEXTI_OPS diff --git a/gcc/config/riscv/riscv-vector-builtins.cc b/gcc/config/riscv/riscv-vector-builtins.cc index 4e2c66c2de7..461447afdef 100644 --- a/gcc/config/riscv/riscv-vector-builtins.cc +++ b/gcc/config/riscv/riscv-vector-builtins.cc @@ -51,6 +51,7 @@ #include "riscv-vector-builtins.h" #include "riscv-vector-builtins-shapes.h" #include "riscv-vector-builtins-bases.h" +#include "thead-vector-builtins.h" using namespace riscv_vector; @@ -246,6 +247,63 @@ static const rvv_type_info iu_ops[] = { #include "riscv-vector-builtins-types.def" {NUM_VECTOR_TYPES, 0}}; +/* A list of all integer will be registered for intrinsic functions. */ +static const rvv_type_info i8_ops[] = { +#define DEF_RVV_I8_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE}, +#include "riscv-vector-builtins-types.def" + {NUM_VECTOR_TYPES, 0}}; + +/* A list of all integer will be registered for intrinsic functions. */ +static const rvv_type_info i16_ops[] = { +#define DEF_RVV_I16_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE}, +#include "riscv-vector-builtins-types.def" + {NUM_VECTOR_TYPES, 0}}; + +/* A list of all integer will be registered for intrinsic functions. */ +static const rvv_type_info i32_ops[] = { +#define DEF_RVV_I32_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE}, +#include "riscv-vector-builtins-types.def" + {NUM_VECTOR_TYPES, 0}}; + +/* A list of all integer will be registered for intrinsic functions. */ +static const rvv_type_info u8_ops[] = { +#define DEF_RVV_U8_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE}, +#include "riscv-vector-builtins-types.def" + {NUM_VECTOR_TYPES, 0}}; + +/* A list of all integer will be registered for intrinsic functions. */ +static const rvv_type_info u16_ops[] = { +#define DEF_RVV_U16_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE}, +#include "riscv-vector-builtins-types.def" + {NUM_VECTOR_TYPES, 0}}; + +/* A list of all integer will be registered for intrinsic functions. */ +static const rvv_type_info u32_ops[] = { +#define DEF_RVV_U32_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE}, +#include "riscv-vector-builtins-types.def" + {NUM_VECTOR_TYPES, 0}}; + +/* A list of all integer will be registered for intrinsic functions. */ +static const rvv_type_info iu8_ops[] = { +#define DEF_RVV_I8_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE}, +#define DEF_RVV_U8_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE}, +#include "riscv-vector-builtins-types.def" + {NUM_VECTOR_TYPES, 0}}; + +/* A list of all integer will be registered for intrinsic functions. */ +static const rvv_type_info iu16_ops[] = { +#define DEF_RVV_I16_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE}, +#define DEF_RVV_U16_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE}, +#include "riscv-vector-builtins-types.def" + {NUM_VECTOR_TYPES, 0}}; + +/* A list of all integer will be registered for intrinsic functions. 
*/ +static const rvv_type_info iu32_ops[] = { +#define DEF_RVV_I32_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE}, +#define DEF_RVV_U32_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE}, +#include "riscv-vector-builtins-types.def" + {NUM_VECTOR_TYPES, 0}}; + /* A list of all types will be registered for intrinsic functions. */ static const rvv_type_info all_ops[] = { #define DEF_RVV_I_OPS(TYPE, REQUIRE) {VECTOR_TYPE_##TYPE, REQUIRE}, @@ -913,7 +971,32 @@ static CONSTEXPR const rvv_arg_type_info tuple_vcreate_args[] /* A list of args for vector_type func (vector_type) function. */ static CONSTEXPR const rvv_arg_type_info ext_vcreate_args[] - = {rvv_arg_type_info (RVV_BASE_vector), + = {rvv_arg_type_info (RVV_BASE_vector), rvv_arg_type_info_end}; + +/* A list of args for vector_type func (const scalar_type *, size_t) + * function. */ +static CONSTEXPR const rvv_arg_type_info scalar_const_ptr_size_args[] + = {rvv_arg_type_info (RVV_BASE_scalar_const_ptr), + rvv_arg_type_info (RVV_BASE_size), rvv_arg_type_info_end}; + +/* A list of args for vector_type func (const scalar_type *, eew8_index_type) + * function. */ +static CONSTEXPR const rvv_arg_type_info scalar_const_ptr_index_args[] + = {rvv_arg_type_info (RVV_BASE_scalar_const_ptr), + rvv_arg_type_info (RVV_BASE_unsigned_vector), rvv_arg_type_info_end}; + +/* A list of args for void func (scalar_type *, eew8_index_type, vector_type) + * function. */ +static CONSTEXPR const rvv_arg_type_info scalar_ptr_index_args[] + = {rvv_arg_type_info (RVV_BASE_scalar_ptr), + rvv_arg_type_info (RVV_BASE_unsigned_vector), + rvv_arg_type_info (RVV_BASE_vector), rvv_arg_type_info_end}; + +/* A list of args for void func (scalar_type *, size_t, vector_type) + * function. */ +static CONSTEXPR const rvv_arg_type_info scalar_ptr_size_args[] + = {rvv_arg_type_info (RVV_BASE_scalar_ptr), + rvv_arg_type_info (RVV_BASE_size), rvv_arg_type_info (RVV_BASE_vector), rvv_arg_type_info_end}; /* A list of none preds that will be registered for intrinsic functions. */ @@ -1429,6 +1512,14 @@ static CONSTEXPR const rvv_op_info iu_shift_vvv_ops rvv_arg_type_info (RVV_BASE_vector), /* Return type */ shift_vv_args /* Args */}; +/* A static operand information for scalar_type func (vector_type, size_t) + * function registration. */ +static CONSTEXPR const rvv_op_info iu_x_s_u_ops + = {iu_ops, /* Types */ + OP_TYPE_vx, /* Suffix */ + rvv_arg_type_info (RVV_BASE_scalar), /* Return type */ + v_size_args /* Args */}; + /* A static operand information for vector_type func (vector_type, size_t) * function registration. */ static CONSTEXPR const rvv_op_info iu_shift_vvx_ops @@ -2604,6 +2695,222 @@ static CONSTEXPR const rvv_op_info all_v_vcreate_lmul4_x2_ops rvv_arg_type_info (RVV_BASE_vlmul_ext_x2), /* Return type */ ext_vcreate_args /* Args */}; +/* A static operand information for vector_type func (const scalar_type *) + * function registration. */ +static CONSTEXPR const rvv_op_info i8_v_scalar_const_ptr_ops + = {i8_ops, /* Types */ + OP_TYPE_v, /* Suffix */ + rvv_arg_type_info (RVV_BASE_vector), /* Return type */ + scalar_const_ptr_args /* Args */}; + +/* A static operand information for vector_type func (const scalar_type *) + * function registration. */ +static CONSTEXPR const rvv_op_info i16_v_scalar_const_ptr_ops + = {i16_ops, /* Types */ + OP_TYPE_v, /* Suffix */ + rvv_arg_type_info (RVV_BASE_vector), /* Return type */ + scalar_const_ptr_args /* Args */}; + +/* A static operand information for vector_type func (const scalar_type *) + * function registration. 
*/ +static CONSTEXPR const rvv_op_info i32_v_scalar_const_ptr_ops + = {i32_ops, /* Types */ + OP_TYPE_v, /* Suffix */ + rvv_arg_type_info (RVV_BASE_vector), /* Return type */ + scalar_const_ptr_args /* Args */}; + +/* A static operand information for vector_type func (const scalar_type *) + * function registration. */ +static CONSTEXPR const rvv_op_info u8_v_scalar_const_ptr_ops + = {u8_ops, /* Types */ + OP_TYPE_v, /* Suffix */ + rvv_arg_type_info (RVV_BASE_vector), /* Return type */ + scalar_const_ptr_args /* Args */}; + +/* A static operand information for vector_type func (const scalar_type *) + * function registration. */ +static CONSTEXPR const rvv_op_info u16_v_scalar_const_ptr_ops + = {u16_ops, /* Types */ + OP_TYPE_v, /* Suffix */ + rvv_arg_type_info (RVV_BASE_vector), /* Return type */ + scalar_const_ptr_args /* Args */}; + +/* A static operand information for vector_type func (const scalar_type *) + * function registration. */ +static CONSTEXPR const rvv_op_info u32_v_scalar_const_ptr_ops + = {u32_ops, /* Types */ + OP_TYPE_v, /* Suffix */ + rvv_arg_type_info (RVV_BASE_vector), /* Return type */ + scalar_const_ptr_args /* Args */}; + +/* A static operand information for vector_type func (const scalar_type *, + * size_t) function registration. */ +static CONSTEXPR const rvv_op_info i8_v_scalar_const_ptr_size_ops + = {i8_ops, /* Types */ + OP_TYPE_v, /* Suffix */ + rvv_arg_type_info (RVV_BASE_vector), /* Return type */ + scalar_const_ptr_size_args /* Args */}; + +/* A static operand information for vector_type func (const scalar_type *, + * size_t) function registration. */ +static CONSTEXPR const rvv_op_info i16_v_scalar_const_ptr_size_ops + = {i16_ops, /* Types */ + OP_TYPE_v, /* Suffix */ + rvv_arg_type_info (RVV_BASE_vector), /* Return type */ + scalar_const_ptr_size_args /* Args */}; + +/* A static operand information for vector_type func (const scalar_type *, + * size_t) function registration. */ +static CONSTEXPR const rvv_op_info i32_v_scalar_const_ptr_size_ops + = {i32_ops, /* Types */ + OP_TYPE_v, /* Suffix */ + rvv_arg_type_info (RVV_BASE_vector), /* Return type */ + scalar_const_ptr_size_args /* Args */}; + +/* A static operand information for vector_type func (const scalar_type *, + * size_t) function registration. */ +static CONSTEXPR const rvv_op_info u8_v_scalar_const_ptr_size_ops + = {u8_ops, /* Types */ + OP_TYPE_v, /* Suffix */ + rvv_arg_type_info (RVV_BASE_vector), /* Return type */ + scalar_const_ptr_size_args /* Args */}; + +/* A static operand information for vector_type func (const scalar_type *, + * size_t) function registration. */ +static CONSTEXPR const rvv_op_info u16_v_scalar_const_ptr_size_ops + = {u16_ops, /* Types */ + OP_TYPE_v, /* Suffix */ + rvv_arg_type_info (RVV_BASE_vector), /* Return type */ + scalar_const_ptr_size_args /* Args */}; + +/* A static operand information for vector_type func (const scalar_type *, + * size_t) function registration. */ +static CONSTEXPR const rvv_op_info u32_v_scalar_const_ptr_size_ops + = {u32_ops, /* Types */ + OP_TYPE_v, /* Suffix */ + rvv_arg_type_info (RVV_BASE_vector), /* Return type */ + scalar_const_ptr_size_args /* Args */}; + +/* A static operand information for vector_type func (const scalar_type *, + * eew8_index_type) function registration. 
*/ +static CONSTEXPR const rvv_op_info i8_v_scalar_const_ptr_index_ops + = {i8_ops, /* Types */ + OP_TYPE_v, /* Suffix */ + rvv_arg_type_info (RVV_BASE_vector), /* Return type */ + scalar_const_ptr_index_args /* Args */}; + +/* A static operand information for vector_type func (const scalar_type *, + * eew8_index_type) function registration. */ +static CONSTEXPR const rvv_op_info u8_v_scalar_const_ptr_index_ops + = {u8_ops, /* Types */ + OP_TYPE_v, /* Suffix */ + rvv_arg_type_info (RVV_BASE_vector), /* Return type */ + scalar_const_ptr_index_args /* Args */}; + +/* A static operand information for vector_type func (const scalar_type *, + * eew8_index_type) function registration. */ +static CONSTEXPR const rvv_op_info i16_v_scalar_const_ptr_index_ops + = {i16_ops, /* Types */ + OP_TYPE_v, /* Suffix */ + rvv_arg_type_info (RVV_BASE_vector), /* Return type */ + scalar_const_ptr_index_args /* Args */}; + +/* A static operand information for vector_type func (const scalar_type *, + * eew8_index_type) function registration. */ +static CONSTEXPR const rvv_op_info u16_v_scalar_const_ptr_index_ops + = {u16_ops, /* Types */ + OP_TYPE_v, /* Suffix */ + rvv_arg_type_info (RVV_BASE_vector), /* Return type */ + scalar_const_ptr_index_args /* Args */}; + +/* A static operand information for vector_type func (const scalar_type *, + * eew8_index_type) function registration. */ +static CONSTEXPR const rvv_op_info i32_v_scalar_const_ptr_index_ops + = {i32_ops, /* Types */ + OP_TYPE_v, /* Suffix */ + rvv_arg_type_info (RVV_BASE_vector), /* Return type */ + scalar_const_ptr_index_args /* Args */}; + +/* A static operand information for vector_type func (const scalar_type *, + * eew8_index_type) function registration. */ +static CONSTEXPR const rvv_op_info u32_v_scalar_const_ptr_index_ops + = {u32_ops, /* Types */ + OP_TYPE_v, /* Suffix */ + rvv_arg_type_info (RVV_BASE_vector), /* Return type */ + scalar_const_ptr_index_args /* Args */}; + +/* A static operand information for void func (scalar_type *, eew8_index_type, + * vector_type) function registration. */ +static CONSTEXPR const rvv_op_info iu8_v_scalar_ptr_index_ops + = {iu8_ops, /* Types */ + OP_TYPE_v, /* Suffix */ + rvv_arg_type_info (RVV_BASE_void), /* Return type */ + scalar_ptr_index_args /* Args */}; + +/* A static operand information for void func (scalar_type *, eew16_index_type, + * vector_type) function registration. */ +static CONSTEXPR const rvv_op_info iu16_v_scalar_ptr_index_ops + = {iu16_ops, /* Types */ + OP_TYPE_v, /* Suffix */ + rvv_arg_type_info (RVV_BASE_void), /* Return type */ + scalar_ptr_index_args /* Args */}; + +/* A static operand information for void func (scalar_type *, eew32_index_type, + * vector_type) function registration. */ +static CONSTEXPR const rvv_op_info iu32_v_scalar_ptr_index_ops + = {iu32_ops, /* Types */ + OP_TYPE_v, /* Suffix */ + rvv_arg_type_info (RVV_BASE_void), /* Return type */ + scalar_ptr_index_args /* Args */}; + +/* A static operand information for void func (scalar_type *, vector_type, + * function registration. */ +static CONSTEXPR const rvv_op_info iu8_v_scalar_ptr_ops + = {iu8_ops, /* Types */ + OP_TYPE_v, /* Suffix */ + rvv_arg_type_info (RVV_BASE_void), /* Return type */ + scalar_ptr_args /* Args */}; + +/* A static operand information for void func (scalar_type *, vector_type) + * function registration. 
*/ +static CONSTEXPR const rvv_op_info iu16_v_scalar_ptr_ops + = {iu16_ops, /* Types */ + OP_TYPE_v, /* Suffix */ + rvv_arg_type_info (RVV_BASE_void), /* Return type */ + scalar_ptr_args /* Args */}; + +/* A static operand information for void func (scalar_type *, vector_type) + * function registration. */ +static CONSTEXPR const rvv_op_info iu32_v_scalar_ptr_ops + = {iu32_ops, /* Types */ + OP_TYPE_v, /* Suffix */ + rvv_arg_type_info (RVV_BASE_void), /* Return type */ + scalar_ptr_args /* Args */}; + +/* A static operand information for void func (scalar_type *, size_t, + * vector_type) function registration. */ +static CONSTEXPR const rvv_op_info iu8_v_scalar_ptr_size_ops + = {iu8_ops, /* Types */ + OP_TYPE_v, /* Suffix */ + rvv_arg_type_info (RVV_BASE_void), /* Return type */ + scalar_ptr_size_args /* Args */}; + +/* A static operand information for void func (scalar_type *, size_t, + * vector_type) function registration. */ +static CONSTEXPR const rvv_op_info iu16_v_scalar_ptr_size_ops + = {iu16_ops, /* Types */ + OP_TYPE_v, /* Suffix */ + rvv_arg_type_info (RVV_BASE_void), /* Return type */ + scalar_ptr_size_args /* Args */}; + +/* A static operand information for void func (scalar_type *, size_t, + * vector_type) function registration. */ +static CONSTEXPR const rvv_op_info iu32_v_scalar_ptr_size_ops + = {iu32_ops, /* Types */ + OP_TYPE_v, /* Suffix */ + rvv_arg_type_info (RVV_BASE_void), /* Return type */ + scalar_ptr_size_args /* Args */}; + /* A list of all RVV base function types. */ static CONSTEXPR const function_type_info function_types[] = { #define DEF_RVV_TYPE_INDEX( \ @@ -2687,6 +2994,10 @@ static function_group_info function_groups[] = { #define DEF_RVV_FUNCTION(NAME, SHAPE, PREDS, OPS_INFO) \ {#NAME, &bases::NAME, &shapes::SHAPE, PREDS, OPS_INFO, REQUIRED_EXTENSIONS}, #include "riscv-vector-builtins-functions.def" +#undef DEF_RVV_FUNCTION +#define DEF_RVV_FUNCTION(NAME, SHAPE, PREDS, OPS_INFO) \ + {#NAME, &bases::NAME, &shapes::SHAPE, PREDS, OPS_INFO, REQUIRED_EXTENSIONS}, +#include "thead-vector-builtins-functions.def" }; /* The RVV types, with their built-in diff --git a/gcc/config/riscv/riscv-vector-builtins.h b/gcc/config/riscv/riscv-vector-builtins.h index 4f38c09d73d..234b6f7a196 100644 --- a/gcc/config/riscv/riscv-vector-builtins.h +++ b/gcc/config/riscv/riscv-vector-builtins.h @@ -123,6 +123,7 @@ enum required_ext ZVKNHB_EXT, /* Crypto vector Zvknhb sub-ext */ ZVKSED_EXT, /* Crypto vector Zvksed sub-ext */ ZVKSH_EXT, /* Crypto vector Zvksh sub-ext */ + XTHEADVECTOR_EXT, /* XTheadVector extension */ }; /* Enumerates the RVV operand types. 
*/ @@ -252,6 +253,8 @@ struct function_group_info return TARGET_ZVKSED; case ZVKSH_EXT: return TARGET_ZVKSH; + case XTHEADVECTOR_EXT: + return TARGET_XTHEADVECTOR; default: gcc_unreachable (); } diff --git a/gcc/config/riscv/t-riscv b/gcc/config/riscv/t-riscv index 067771e3c97..09512092056 100644 --- a/gcc/config/riscv/t-riscv +++ b/gcc/config/riscv/t-riscv @@ -23,6 +23,8 @@ riscv-vector-builtins.o: $(srcdir)/config/riscv/riscv-vector-builtins.cc \ $(srcdir)/config/riscv/riscv-vector-builtins-shapes.h \ $(srcdir)/config/riscv/riscv-vector-builtins-bases.h \ $(srcdir)/config/riscv/riscv-vector-builtins-types.def \ + $(srcdir)/config/riscv/thead-vector-builtins.h \ + $(srcdir)/config/riscv/thead-vector-builtins-functions.def \ $(RISCV_BUILTINS_H) $(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \ $(srcdir)/config/riscv/riscv-vector-builtins.cc @@ -50,6 +52,20 @@ riscv-vector-builtins-bases.o: \ $(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \ $(srcdir)/config/riscv/riscv-vector-builtins-bases.cc +thead-vector-builtins.o: \ + $(srcdir)/config/riscv/thead-vector-builtins.cc \ + $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(TREE_H) $(RTL_H) \ + $(TM_P_H) memmodel.h insn-codes.h $(OPTABS_H) $(RECOG_H) \ + $(EXPR_H) $(BASIC_BLOCK_H) $(FUNCTION_H) fold-const.h $(GIMPLE_H) \ + gimple-iterator.h gimplify.h explow.h $(EMIT_RTL_H) tree-vector-builder.h \ + rtx-vector-builder.h \ + $(srcdir)/config/riscv/riscv-vector-builtins-shapes.h \ + $(srcdir)/config/riscv/riscv-vector-builtins-bases.h \ + $(srcdir)/config/riscv/thead-vector-builtins.h \ + $(RISCV_BUILTINS_H) + $(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \ + $(srcdir)/config/riscv/thead-vector-builtins.cc + riscv-sr.o: $(srcdir)/config/riscv/riscv-sr.cc $(CONFIG_H) \ $(SYSTEM_H) $(TM_H) $(COMPILER) -c $(ALL_COMPILERFLAGS) $(ALL_CPPFLAGS) $(INCLUDES) \ diff --git a/gcc/config/riscv/thead-vector-builtins-functions.def b/gcc/config/riscv/thead-vector-builtins-functions.def new file mode 100644 index 00000000000..667820d4c3e --- /dev/null +++ b/gcc/config/riscv/thead-vector-builtins-functions.def @@ -0,0 +1,39 @@ +#ifndef DEF_RVV_FUNCTION +#define DEF_RVV_FUNCTION(NAME, SHAPE, PREDS, OPS_INFO) +#endif + +#define REQUIRED_EXTENSIONS XTHEADVECTOR_EXT +DEF_RVV_FUNCTION (th_vlb, th_loadstore_width, full_preds, i8_v_scalar_const_ptr_ops) +DEF_RVV_FUNCTION (th_vlh, th_loadstore_width, full_preds, i16_v_scalar_const_ptr_ops) +DEF_RVV_FUNCTION (th_vlw, th_loadstore_width, full_preds, i32_v_scalar_const_ptr_ops) +DEF_RVV_FUNCTION (th_vlbu, th_loadstore_width, full_preds, u8_v_scalar_const_ptr_ops) +DEF_RVV_FUNCTION (th_vlhu, th_loadstore_width, full_preds, u16_v_scalar_const_ptr_ops) +DEF_RVV_FUNCTION (th_vlwu, th_loadstore_width, full_preds, u32_v_scalar_const_ptr_ops) +DEF_RVV_FUNCTION (th_vsb, th_loadstore_width, none_m_preds, iu8_v_scalar_ptr_ops) +DEF_RVV_FUNCTION (th_vsh, th_loadstore_width, none_m_preds, iu16_v_scalar_ptr_ops) +DEF_RVV_FUNCTION (th_vsw, th_loadstore_width, none_m_preds, iu32_v_scalar_ptr_ops) +DEF_RVV_FUNCTION (th_vlsb, th_loadstore_width, full_preds, i8_v_scalar_const_ptr_size_ops) +DEF_RVV_FUNCTION (th_vlsh, th_loadstore_width, full_preds, i16_v_scalar_const_ptr_size_ops) +DEF_RVV_FUNCTION (th_vlsw, th_loadstore_width, full_preds, i32_v_scalar_const_ptr_size_ops) +DEF_RVV_FUNCTION (th_vlsbu, th_loadstore_width, full_preds, u8_v_scalar_const_ptr_size_ops) +DEF_RVV_FUNCTION (th_vlshu, th_loadstore_width, full_preds, u16_v_scalar_const_ptr_size_ops) +DEF_RVV_FUNCTION (th_vlswu, 
th_loadstore_width, full_preds, u32_v_scalar_const_ptr_size_ops) +DEF_RVV_FUNCTION (th_vssb, th_loadstore_width, none_m_preds, iu8_v_scalar_ptr_size_ops) +DEF_RVV_FUNCTION (th_vssh, th_loadstore_width, none_m_preds, iu16_v_scalar_ptr_size_ops) +DEF_RVV_FUNCTION (th_vssw, th_loadstore_width, none_m_preds, iu32_v_scalar_ptr_size_ops) +DEF_RVV_FUNCTION (th_vlxb, th_indexed_loadstore_width, full_preds, i8_v_scalar_const_ptr_index_ops) +DEF_RVV_FUNCTION (th_vlxh, th_indexed_loadstore_width, full_preds, i16_v_scalar_const_ptr_index_ops) +DEF_RVV_FUNCTION (th_vlxw, th_indexed_loadstore_width, full_preds, i32_v_scalar_const_ptr_index_ops) +DEF_RVV_FUNCTION (th_vlxbu, th_indexed_loadstore_width, full_preds, u8_v_scalar_const_ptr_index_ops) +DEF_RVV_FUNCTION (th_vlxhu, th_indexed_loadstore_width, full_preds, u16_v_scalar_const_ptr_index_ops) +DEF_RVV_FUNCTION (th_vlxwu, th_indexed_loadstore_width, full_preds, u32_v_scalar_const_ptr_index_ops) +DEF_RVV_FUNCTION (th_vsxb, th_indexed_loadstore_width, none_m_preds, iu8_v_scalar_ptr_index_ops) +DEF_RVV_FUNCTION (th_vsxh, th_indexed_loadstore_width, none_m_preds, iu16_v_scalar_ptr_index_ops) +DEF_RVV_FUNCTION (th_vsxw, th_indexed_loadstore_width, none_m_preds, iu32_v_scalar_ptr_index_ops) +DEF_RVV_FUNCTION (th_vsuxb, th_indexed_loadstore_width, none_m_preds, iu8_v_scalar_ptr_index_ops) +DEF_RVV_FUNCTION (th_vsuxh, th_indexed_loadstore_width, none_m_preds, iu16_v_scalar_ptr_index_ops) +DEF_RVV_FUNCTION (th_vsuxw, th_indexed_loadstore_width, none_m_preds, iu32_v_scalar_ptr_index_ops) +DEF_RVV_FUNCTION (th_vext_x_v, th_extract, none_preds, iu_x_s_u_ops) +#undef REQUIRED_EXTENSIONS + +#undef DEF_RVV_FUNCTION diff --git a/gcc/config/riscv/thead-vector-builtins.cc b/gcc/config/riscv/thead-vector-builtins.cc new file mode 100644 index 00000000000..c0002f255ee --- /dev/null +++ b/gcc/config/riscv/thead-vector-builtins.cc @@ -0,0 +1,200 @@ +/* function_base implementation for RISC-V XTheadVector Extension + for GNU compiler. + Copyright (C) 2022-2023 Free Software Foundation, Inc. + Contributed by Joshua (cooper.joshua@linux.alibaba.com), T-Head + Semiconductor Co., Ltd. + + This file is part of GCC. + + GCC is free software; you can redistribute it and/or modify it + under the terms of the GNU General Public License as published by + the Free Software Foundation; either version 3, or (at your option) + any later version. + + GCC is distributed in the hope that it will be useful, but + WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + General Public License for more details. + + You should have received a copy of the GNU General Public License + along with GCC; see the file COPYING3. If not see + . 
diff --git a/gcc/config/riscv/thead-vector-builtins.cc b/gcc/config/riscv/thead-vector-builtins.cc
new file mode 100644
index 00000000000..c0002f255ee
--- /dev/null
+++ b/gcc/config/riscv/thead-vector-builtins.cc
@@ -0,0 +1,200 @@
+/* function_base implementation for RISC-V XTheadVector Extension
+   for GNU compiler.
+   Copyright (C) 2022-2023 Free Software Foundation, Inc.
+   Contributed by Joshua (cooper.joshua@linux.alibaba.com), T-Head
+   Semiconductor Co., Ltd.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3, or (at your option)
+   any later version.
+
+   GCC is distributed in the hope that it will be useful, but
+   WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with GCC; see the file COPYING3.  If not see
+   <http://www.gnu.org/licenses/>.  */
+
+#include "config.h"
+#include "system.h"
+#include "coretypes.h"
+#include "tm.h"
+#include "tree.h"
+#include "rtl.h"
+#include "tm_p.h"
+#include "memmodel.h"
+#include "insn-codes.h"
+#include "optabs.h"
+#include "recog.h"
+#include "expr.h"
+#include "basic-block.h"
+#include "function.h"
+#include "fold-const.h"
+#include "gimple.h"
+#include "gimple-iterator.h"
+#include "gimplify.h"
+#include "explow.h"
+#include "emit-rtl.h"
+#include "tree-vector-builder.h"
+#include "rtx-vector-builder.h"
+#include "riscv-vector-builtins.h"
+#include "riscv-vector-builtins-shapes.h"
+#include "riscv-vector-builtins-bases.h"
+#include "thead-vector-builtins.h"
+
+using namespace riscv_vector;
+
+namespace riscv_vector {
+
+/* The addressing variants handled by th_loadstore_width.  */
+enum lst_type
+{
+  LST_UNIT_STRIDE,
+  LST_STRIDED,
+  LST_INDEXED,
+};
+
+/* Implements
+ * th.vl(b/h/w)[u].v/th.vs(b/h/w)[u].v/th.vls(b/h/w)[u].v/th.vss(b/h/w)[u].v/
+ * th.vlx(b/h/w)[u].v/th.vs[u]x(b/h/w).v
+ * codegen.  */
+template <bool STORE_P, lst_type LST_TYPE, int UNSPEC>
+class th_loadstore_width : public function_base
+{
+public:
+  bool apply_tail_policy_p () const override { return !STORE_P; }
+  bool apply_mask_policy_p () const override { return !STORE_P; }
+
+  unsigned int call_properties (const function_instance &) const override
+  {
+    if (STORE_P)
+      return CP_WRITE_MEMORY;
+    else
+      return CP_READ_MEMORY;
+  }
+
+  bool can_be_overloaded_p (enum predication_type_index pred) const override
+  {
+    if (STORE_P || LST_TYPE == LST_INDEXED)
+      return true;
+    return pred != PRED_TYPE_none;
+  }
+
+  rtx expand (function_expander &e) const override
+  {
+    gcc_assert (TARGET_XTHEADVECTOR);
+    if (LST_TYPE == LST_INDEXED)
+      {
+	if (STORE_P)
+	  return e.use_exact_insn (
+	    code_for_pred_indexed_store_width (UNSPEC, UNSPEC,
+					       e.vector_mode ()));
+	else
+	  return e.use_exact_insn (
+	    code_for_pred_indexed_load_width (UNSPEC, e.vector_mode ()));
+      }
+    else if (LST_TYPE == LST_STRIDED)
+      {
+	if (STORE_P)
+	  return e.use_contiguous_store_insn (
+	    code_for_pred_strided_store_width (UNSPEC, e.vector_mode ()));
+	else
+	  return e.use_contiguous_load_insn (
+	    code_for_pred_strided_load_width (UNSPEC, e.vector_mode ()));
+      }
+    else
+      {
+	if (STORE_P)
+	  return e.use_contiguous_store_insn (
+	    code_for_pred_store_width (UNSPEC, e.vector_mode ()));
+	else
+	  return e.use_contiguous_load_insn (
+	    code_for_pred_mov_width (UNSPEC, e.vector_mode ()));
+      }
+  }
+};
+
+/* Implements vext.x.v.  */
+class th_extract : public function_base
+{
+public:
+  bool apply_vl_p () const override { return false; }
+  bool apply_tail_policy_p () const override { return false; }
+  bool apply_mask_policy_p () const override { return false; }
+  bool use_mask_predication_p () const override { return false; }
+  bool has_merge_operand_p () const override { return false; }
+
+  rtx expand (function_expander &e) const override
+  {
+    gcc_assert (TARGET_XTHEADVECTOR);
+    return e.use_exact_insn (code_for_pred_th_extract (e.vector_mode ()));
+  }
+};
+
+static CONSTEXPR const th_loadstore_width<false, LST_UNIT_STRIDE, UNSPEC_TH_VLB> th_vlb_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_UNIT_STRIDE, UNSPEC_TH_VLBU> th_vlbu_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_UNIT_STRIDE, UNSPEC_TH_VLH> th_vlh_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_UNIT_STRIDE, UNSPEC_TH_VLHU> th_vlhu_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_UNIT_STRIDE, UNSPEC_TH_VLW> th_vlw_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_UNIT_STRIDE, UNSPEC_TH_VLWU> th_vlwu_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_UNIT_STRIDE, UNSPEC_TH_VLB> th_vsb_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_UNIT_STRIDE, UNSPEC_TH_VLH> th_vsh_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_UNIT_STRIDE, UNSPEC_TH_VLW> th_vsw_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_STRIDED, UNSPEC_TH_VLSB> th_vlsb_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_STRIDED, UNSPEC_TH_VLSBU> th_vlsbu_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_STRIDED, UNSPEC_TH_VLSH> th_vlsh_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_STRIDED, UNSPEC_TH_VLSHU> th_vlshu_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_STRIDED, UNSPEC_TH_VLSW> th_vlsw_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_STRIDED, UNSPEC_TH_VLSWU> th_vlswu_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_STRIDED, UNSPEC_TH_VLSB> th_vssb_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_STRIDED, UNSPEC_TH_VLSH> th_vssh_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_STRIDED, UNSPEC_TH_VLSW> th_vssw_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_INDEXED, UNSPEC_TH_VLXB> th_vlxb_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_INDEXED, UNSPEC_TH_VLXBU> th_vlxbu_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_INDEXED, UNSPEC_TH_VLXH> th_vlxh_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_INDEXED, UNSPEC_TH_VLXHU> th_vlxhu_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_INDEXED, UNSPEC_TH_VLXW> th_vlxw_obj;
+static CONSTEXPR const th_loadstore_width<false, LST_INDEXED, UNSPEC_TH_VLXWU> th_vlxwu_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_INDEXED, UNSPEC_TH_VLXB> th_vsxb_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_INDEXED, UNSPEC_TH_VLXH> th_vsxh_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_INDEXED, UNSPEC_TH_VLXW> th_vsxw_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_INDEXED, UNSPEC_TH_VSUXB> th_vsuxb_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_INDEXED, UNSPEC_TH_VSUXH> th_vsuxh_obj;
+static CONSTEXPR const th_loadstore_width<true, LST_INDEXED, UNSPEC_TH_VSUXW> th_vsuxw_obj;
+static CONSTEXPR const th_extract th_vext_x_v_obj;
+
+/* Declare the function base NAME, pointing it to an instance
+   of class <NAME>_obj.  */
+#define BASE(NAME) \
+  namespace bases { const function_base *const NAME = &NAME##_obj; }
+
+BASE (th_vlb)
+BASE (th_vlh)
+BASE (th_vlw)
+BASE (th_vlbu)
+BASE (th_vlhu)
+BASE (th_vlwu)
+BASE (th_vsb)
+BASE (th_vsh)
+BASE (th_vsw)
+BASE (th_vlsb)
+BASE (th_vlsh)
+BASE (th_vlsw)
+BASE (th_vlsbu)
+BASE (th_vlshu)
+BASE (th_vlswu)
+BASE (th_vssb)
+BASE (th_vssh)
+BASE (th_vssw)
+BASE (th_vlxb)
+BASE (th_vlxh)
+BASE (th_vlxw)
+BASE (th_vlxbu)
+BASE (th_vlxhu)
+BASE (th_vlxwu)
+BASE (th_vsxb)
+BASE (th_vsxh)
+BASE (th_vsxw)
+BASE (th_vsuxb)
+BASE (th_vsuxh)
+BASE (th_vsuxw)
+BASE (th_vext_x_v)
+
+} // end namespace riscv_vector
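For orientation, each BASE (th_vlb) line above expands, per the macro just
defined, to

  namespace bases { const function_base *const th_vlb = &th_vlb_obj; }

so every pointer declared in thead-vector-builtins.h (next hunk) is bound to
exactly one of the singleton objects defined in this file.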
diff --git a/gcc/config/riscv/thead-vector-builtins.h b/gcc/config/riscv/thead-vector-builtins.h
new file mode 100644
index 00000000000..4720c6334d8
--- /dev/null
+++ b/gcc/config/riscv/thead-vector-builtins.h
@@ -0,0 +1,64 @@
+/* function_base declaration for RISC-V XTheadVector Extension
+   for GNU compiler.
+   Copyright (C) 2022-2023 Free Software Foundation, Inc.
+   Contributed by Joshua (cooper.joshua@linux.alibaba.com), T-Head
+   Semiconductor Co., Ltd.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3, or (at your option)
+   any later version.
+
+   GCC is distributed in the hope that it will be useful, but
+   WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with GCC; see the file COPYING3.  If not see
+   <http://www.gnu.org/licenses/>.  */
+
+#ifndef GCC_THEAD_VECTOR_BUILTINS_H
+#define GCC_THEAD_VECTOR_BUILTINS_H
+
+namespace riscv_vector {
+
+namespace bases {
+extern const function_base *const th_vlb;
+extern const function_base *const th_vlh;
+extern const function_base *const th_vlw;
+extern const function_base *const th_vlbu;
+extern const function_base *const th_vlhu;
+extern const function_base *const th_vlwu;
+extern const function_base *const th_vsb;
+extern const function_base *const th_vsh;
+extern const function_base *const th_vsw;
+extern const function_base *const th_vlsb;
+extern const function_base *const th_vlsh;
+extern const function_base *const th_vlsw;
+extern const function_base *const th_vlsbu;
+extern const function_base *const th_vlshu;
+extern const function_base *const th_vlswu;
+extern const function_base *const th_vssb;
+extern const function_base *const th_vssh;
+extern const function_base *const th_vssw;
+extern const function_base *const th_vlxb;
+extern const function_base *const th_vlxh;
+extern const function_base *const th_vlxw;
+extern const function_base *const th_vlxbu;
+extern const function_base *const th_vlxhu;
+extern const function_base *const th_vlxwu;
+extern const function_base *const th_vsxb;
+extern const function_base *const th_vsxh;
+extern const function_base *const th_vsxw;
+extern const function_base *const th_vsuxb;
+extern const function_base *const th_vsuxh;
+extern const function_base *const th_vsuxw;
+extern const function_base *const th_vext_x_v;
+}
+
+} // end namespace riscv_vector
+
+#endif
diff --git a/gcc/config/riscv/thead-vector.md b/gcc/config/riscv/thead-vector.md
index ebbb25f8f2a..26107dae49b 100644
--- a/gcc/config/riscv/thead-vector.md
+++ b/gcc/config/riscv/thead-vector.md
@@ -1,7 +1,95 @@
 (define_c_enum "unspec" [
+  UNSPEC_TH_VLB
+  UNSPEC_TH_VLBU
+  UNSPEC_TH_VLH
+  UNSPEC_TH_VLHU
+  UNSPEC_TH_VLW
+  UNSPEC_TH_VLWU
+
+  UNSPEC_TH_VLSB
+  UNSPEC_TH_VLSBU
+  UNSPEC_TH_VLSH
+  UNSPEC_TH_VLSHU
+  UNSPEC_TH_VLSW
+  UNSPEC_TH_VLSWU
+
+  UNSPEC_TH_VLXB
+  UNSPEC_TH_VLXBU
+  UNSPEC_TH_VLXH
+  UNSPEC_TH_VLXHU
+  UNSPEC_TH_VLXW
+  UNSPEC_TH_VLXWU
+
+  UNSPEC_TH_VSUXB
+  UNSPEC_TH_VSUXH
+  UNSPEC_TH_VSUXW
+
   UNSPEC_TH_VWLDST
 ])
 
+(define_int_iterator UNSPEC_TH_VLMEM_OP [
+  UNSPEC_TH_VLB UNSPEC_TH_VLBU
+  UNSPEC_TH_VLH UNSPEC_TH_VLHU
+  UNSPEC_TH_VLW UNSPEC_TH_VLWU
+])
+
+(define_int_iterator UNSPEC_TH_VLSMEM_OP [
+  UNSPEC_TH_VLSB UNSPEC_TH_VLSBU
+  UNSPEC_TH_VLSH UNSPEC_TH_VLSHU
+  UNSPEC_TH_VLSW UNSPEC_TH_VLSWU
+])
+
+(define_int_iterator UNSPEC_TH_VLXMEM_OP [
+  UNSPEC_TH_VLXB UNSPEC_TH_VLXBU
+  UNSPEC_TH_VLXH UNSPEC_TH_VLXHU
+  UNSPEC_TH_VLXW UNSPEC_TH_VLXWU
+])
+
+(define_int_attr vlmem_op_attr [
+  (UNSPEC_TH_VLB "b") (UNSPEC_TH_VLBU "bu")
+  (UNSPEC_TH_VLH "h") (UNSPEC_TH_VLHU "hu")
+  (UNSPEC_TH_VLW "w") (UNSPEC_TH_VLWU "wu")
+  (UNSPEC_TH_VLSB "b") (UNSPEC_TH_VLSBU "bu")
+  (UNSPEC_TH_VLSH "h") (UNSPEC_TH_VLSHU "hu")
+  (UNSPEC_TH_VLSW "w") (UNSPEC_TH_VLSWU "wu")
+  (UNSPEC_TH_VLXB "b") (UNSPEC_TH_VLXBU "bu")
+  (UNSPEC_TH_VLXH "h") (UNSPEC_TH_VLXHU "hu")
+  (UNSPEC_TH_VLXW "w") (UNSPEC_TH_VLXWU "wu")
+  (UNSPEC_TH_VSUXB "b")
+  (UNSPEC_TH_VSUXH "h")
+  (UNSPEC_TH_VSUXW "w")
+])
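These int iterators and attributes let one pattern template serve every width
and sign variant: the function_base implementations earlier in the patch pick a
variant at expand time by handing the unspec to the generated code_for_* helper.
A minimal sketch, with an arbitrarily chosen RVV mode:

  /* Illustrative fragment only: requests the unit-stride byte load
     (suffix "b" via vlmem_op_attr) of @pred_mov<vlmem_op_attr>_width
     for one vector mode; the mode is not taken from this patch.  */
  insn_code icode = code_for_pred_mov_width (UNSPEC_TH_VLB, VNx4QImode);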
+
+(define_int_attr vlmem_order_attr [
+  (UNSPEC_TH_VLXB "")
+  (UNSPEC_TH_VLXH "")
+  (UNSPEC_TH_VLXW "")
+  (UNSPEC_TH_VSUXB "u")
+  (UNSPEC_TH_VSUXH "u")
+  (UNSPEC_TH_VSUXW "u")
+])
+
+(define_int_iterator UNSPEC_TH_VSMEM_OP [
+  UNSPEC_TH_VLB
+  UNSPEC_TH_VLH
+  UNSPEC_TH_VLW
+])
+
+(define_int_iterator UNSPEC_TH_VSSMEM_OP [
+  UNSPEC_TH_VLSB
+  UNSPEC_TH_VLSH
+  UNSPEC_TH_VLSW
+])
+
+(define_int_iterator UNSPEC_TH_VSXMEM_OP [
+  UNSPEC_TH_VLXB
+  UNSPEC_TH_VLXH
+  UNSPEC_TH_VLXW
+  UNSPEC_TH_VSUXB
+  UNSPEC_TH_VSUXH
+  UNSPEC_TH_VSUXW
+])
+
 (define_mode_iterator V_VLS_VT [V VLS VT])
 (define_mode_iterator V_VB_VLS_VT [V VB VLS VT])
 
@@ -117,4 +205,169 @@
   [(set_attr "type" "vsetvl")
    (set_attr "mode" "<MODE>")
    (set (attr "sew") (symbol_ref "INTVAL (operands[1])"))
-   (set (attr "vlmul") (symbol_ref "INTVAL (operands[2])"))])
\ No newline at end of file
+   (set (attr "vlmul") (symbol_ref "INTVAL (operands[2])"))])
+
+(define_expand "@pred_mov<vlmem_op_attr>_width"
+  [(set (match_operand:V_VLS 0 "nonimmediate_operand")
+	(if_then_else:V_VLS
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand")
+	     (match_operand 4 "vector_length_operand")
+	     (match_operand 5 "const_int_operand")
+	     (match_operand 6 "const_int_operand")
+	     (match_operand 7 "const_int_operand")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_TH_VLMEM_OP)
+	  (match_operand:V_VLS 3 "vector_move_operand")
+	  (match_operand:V_VLS 2 "vector_merge_operand")))]
+  "TARGET_XTHEADVECTOR"
+  {})
+
+(define_insn_and_split "*pred_mov<vlmem_op_attr>_width"
+  [(set (match_operand:V_VLS 0 "nonimmediate_operand"	     "=vr,    vr,    vd,     m,    vr,    vr")
+	(if_then_else:V_VLS
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand"   "vmWc1,   Wc1,    vm, vmWc1,   Wc1,   Wc1")
+	     (match_operand 4 "vector_length_operand"	   "   rK,    rK,    rK,    rK,    rK,    rK")
+	     (match_operand 5 "const_int_operand"	   "    i,     i,     i,     i,     i,     i")
+	     (match_operand 6 "const_int_operand"	   "    i,     i,     i,     i,     i,     i")
+	     (match_operand 7 "const_int_operand"	   "    i,     i,     i,     i,     i,     i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_TH_VLMEM_OP)
+	  (match_operand:V_VLS 3 "reg_or_mem_operand"	   "    m,     m,     m,    vr,    vr,    vr")
+	  (match_operand:V_VLS 2 "vector_merge_operand"	   "    0,    vu,    vu,    vu,    vu,     0")))]
+  "(TARGET_XTHEADVECTOR
+    && (register_operand (operands[0], <MODE>mode)
+	|| register_operand (operands[3], <MODE>mode)))"
+  "@
+   vl<vlmem_op_attr>.v\t%0,%3%p1
+   vl<vlmem_op_attr>.v\t%0,%3
+   vl<vlmem_op_attr>.v\t%0,%3,%1.t
+   vs<vlmem_op_attr>.v\t%3,%0%p1
+   vmv.v.v\t%0,%3
+   vmv.v.v\t%0,%3"
+  "&& register_operand (operands[0], <MODE>mode)
+   && register_operand (operands[3], <MODE>mode)
+   && satisfies_constraint_vu (operands[2])
+   && INTVAL (operands[7]) == riscv_vector::VLMAX"
+  [(set (match_dup 0) (match_dup 3))]
+  ""
+  [(set_attr "type" "vlde,vlde,vlde,vste,vimov,vimov")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_store<vlmem_op_attr>_width"
+  [(set (match_operand:VI 0 "memory_operand"		 "+m")
+	(if_then_else:VI
+	  (unspec:<VM>
+	    [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+	     (match_operand 3 "vector_length_operand"	 "   rK")
+	     (match_operand 4 "const_int_operand"	 "    i")
+	     (reg:SI VL_REGNUM)
+	     (reg:SI VTYPE_REGNUM)] UNSPEC_TH_VSMEM_OP)
+	  (match_operand:VI 2 "register_operand"	 "   vr")
+	  (match_dup 0)))]
+  "TARGET_XTHEADVECTOR"
+  "vs<vlmem_op_attr>.v\t%2,%0%p1"
+  [(set_attr "type" "vste")
+   (set_attr "mode" "<MODE>")
+   (set (attr "avl_type_idx") (const_int 4))
+   (set_attr "vl_op_idx" "3")])
"vector_mask_operand" "vmWc1, Wc1, vm") + (match_operand 5 "vector_length_operand" " rK, rK, rK") + (match_operand 6 "const_int_operand" " i, i, i") + (match_operand 7 "const_int_operand" " i, i, i") + (match_operand 8 "const_int_operand" " i, i, i") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_TH_VLSMEM_OP) + (unspec:VI + [(match_operand:VI 3 "memory_operand" " m, m, m") + (match_operand 4 "pmode_reg_or_0_operand" " rJ, rJ, rJ")] UNSPEC_TH_VLSMEM_OP) + (match_operand:VI 2 "vector_merge_operand" " 0, vu, vu")))] + "TARGET_XTHEADVECTOR" + "vls.v\t%0,%3,%z4%p1" + [(set_attr "type" "vlds") + (set_attr "mode" "")]) + +(define_insn "@pred_strided_store_width" + [(set (match_operand:VI 0 "memory_operand" "+m") + (if_then_else:VI + (unspec: + [(match_operand: 1 "vector_mask_operand" "vmWc1") + (match_operand 4 "vector_length_operand" " rK") + (match_operand 5 "const_int_operand" " i") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_TH_VSSMEM_OP) + (unspec:VI + [(match_operand 2 "pmode_reg_or_0_operand" " rJ") + (match_operand:VI 3 "register_operand" " vr")] UNSPEC_TH_VSSMEM_OP) + (match_dup 0)))] + "TARGET_XTHEADVECTOR" + "vss.v\t%3,%0,%z2%p1" + [(set_attr "type" "vsts") + (set_attr "mode" "") + (set (attr "avl_type_idx") (const_int 5))]) + +(define_insn "@pred_indexed_load_width" + [(set (match_operand:VI 0 "register_operand" "=vd, vr,vd, vr") + (if_then_else:VI + (unspec: + [(match_operand: 1 "vector_mask_operand" " vm,Wc1,vm,Wc1") + (match_operand 5 "vector_length_operand" " rK, rK,rK, rK") + (match_operand 6 "const_int_operand" " i, i, i, i") + (match_operand 7 "const_int_operand" " i, i, i, i") + (match_operand 8 "const_int_operand" " i, i, i, i") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_TH_VLXMEM_OP) + (unspec:VI + [(match_operand 3 "pmode_reg_or_0_operand" " rJ, rJ,rJ, rJ") + (mem:BLK (scratch)) + (match_operand:VI 4 "register_operand" " vr, vr,vr, vr")] UNSPEC_TH_VLXMEM_OP) + (match_operand:VI 2 "vector_merge_operand" " vu, vu, 0, 0")))] + "TARGET_XTHEADVECTOR" + "vlx.v\t%0,(%z3),%4%p1" + [(set_attr "type" "vldux") + (set_attr "mode" "")]) + +(define_insn "@pred_indexed_store_width" + [(set (mem:BLK (scratch)) + (unspec:BLK + [(unspec: + [(match_operand: 0 "vector_mask_operand" "vmWc1") + (match_operand 4 "vector_length_operand" " rK") + (match_operand 5 "const_int_operand" " i") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_TH_VSXMEM_OP) + (match_operand 1 "pmode_reg_or_0_operand" " rJ") + (match_operand:VI 2 "register_operand" " vr") + (match_operand:VI 3 "register_operand" " vr")] UNSPEC_TH_VSXMEM_OP))] + "TARGET_XTHEADVECTOR" + "vsx.v\t%3,(%z1),%2%p0" + [(set_attr "type" "vstux") + (set_attr "mode" "")]) + +(define_expand "@pred_th_extract" + [(set (match_operand: 0 "register_operand") + (unspec: + [(vec_select: + (match_operand:V_VLSI 1 "register_operand") + (parallel [(match_operand:DI 2 "register_operand" "r")])) + (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE))] + "TARGET_XTHEADVECTOR" +{}) + +(define_insn "*pred_th_extract" + [(set (match_operand: 0 "register_operand" "=r") + (unspec: + [(vec_select: + (match_operand:V_VLSI 1 "register_operand" "vr") + (parallel [(match_operand:DI 2 "register_operand" "r")])) + (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE))] + "TARGET_XTHEADVECTOR" + "vext.x.v\t%0,%1,%2" + [(set_attr "type" "vimovvx") + (set_attr "mode" "")])