Message ID: 20231206024524.10792-3-wangfeng@eswincomputing.com
State: New
Series: [1/4] RISC-V: Add crypto vector implied ISA info.
Do vector crypto instructions demand RATIO?
If not, add them into:
;; It is valid for instruction that require sew/lmul ratio.
(define_attr "ratio" ""
(cond [(eq_attr "type" "vimov,vfmov,vldux,vldox,vstux,vstox,\
vialu,vshift,vicmp,vimul,vidiv,vsalu,\
vext,viwalu,viwmul,vicalu,vnshift,\
vimuladd,vimerge,vaalu,vsmul,vsshift,\
vnclip,viminmax,viwmuladd,vmffs,vmsfs,\
vmiota,vmidx,vfalu,vfmul,vfminmax,vfdiv,\
vfwalu,vfwmul,vfsqrt,vfrecp,vfsgnj,vfcmp,\
vfmerge,vfcvtitof,vfcvtftoi,vfwcvtitof,\
vfwcvtftoi,vfwcvtftof,vfncvtitof,vfncvtftoi,\
vfncvtftof,vfmuladd,vfwmuladd,vfclass,vired,\
viwred,vfredu,vfredo,vfwredu,vfwredo,vimovvx,\
vimovxv,vfmovvf,vfmovfv,vslideup,vslidedown,\
vislide1up,vislide1down,vfslide1up,vfslide1down,\
vgather,vcompress,vlsegdux,vlsegdox,vssegtux,vssegtox")
(const_int INVALID_ATTRIBUTE)
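As background for the RATIO question (an editor's sketch, not part of the patch): the ratio an RVV instruction demands is SEW/LMUL, and two vtype configurations with the same ratio yield the same VLMAX for a given VLEN, which is why the attribute tracks it.

```python
# Sketch (illustration only, not from the patch): the "ratio" attribute
# corresponds to SEW / LMUL. Configurations sharing a ratio give the same
# VLMAX (= VLEN * LMUL / SEW) for any fixed VLEN.
from fractions import Fraction

def sew_lmul_ratio(sew, lmul):
    """sew in bits (8/16/32/64); lmul may be fractional, e.g. Fraction(1, 2)."""
    return Fraction(sew) / Fraction(lmul)

# e32,m1 and e64,m2 share ratio 32, so they agree on VLMAX.
print(sew_lmul_ratio(32, 1))              # 32
print(sew_lmul_ratio(64, 2))              # 32
print(sew_lmul_ratio(8, Fraction(1, 2)))  # 16
```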
+(define_insn "@pred_vandn<mode>"
+ [(set (match_operand:VI 0 "register_operand" "=vd,vd")
It seems none of the vector crypto instructions are allowed to use v0? Why not use vr?
+ (set_attr "mode" "<VWEXTI:MODE>")])
Using <MODE> is enough.
+(define_insn "@pred_vwsll<mode>_scalar"
+ [(set (match_operand:VWEXTI 0 "register_operand" "=&vd")
+ (if_then_else:VWEXTI
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+ (match_operand 5 "vector_length_operand" " rK")
+ (match_operand 6 "const_int_operand" " i")
+ (match_operand 7 "const_int_operand" " i")
+ (match_operand 8 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (ashift:VWEXTI
+ (zero_extend:VWEXTI
+ (match_operand:<V_DOUBLE_TRUNC> 3 "register_operand" "vr"))
+ (match_operand:<VSUBEL> 4 "pmode_reg_or_uimm5_operand" "rK"))
+ (match_operand:VWEXTI 2 "vector_merge_operand" "0vu")))]
+ "TARGET_ZVBB"
+ "vwsll.v%o4\t%0,%3,%4%p1"
+ [(set_attr "type" "vwsll")
+ (set_attr "mode" "<VWEXTI:MODE>")])
It seems we could leverage the EEW widening overlap rule?
See RVV ISA:
;; According to RVV ISA:
;; The destination EEW is greater than the source EEW, the source EMUL is at least 1,
;; and the overlap is in the highest-numbered part of the destination register group
;; (e.g., when LMUL=8, vzext.vf4 v0, v6 is legal, but a source of v0, v2, or v4 is not).
;; So the source operand should have LMUL >= 1.
Reference patch: https://gcc.gnu.org/pipermail/gcc-patches/2023-December/638869.html
Currently, I don't have a solution to support highest-numbered-part overlap for vv instructions.
Keeping them early-clobber for now is OK.
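The overlap rule quoted from the RVV ISA can be sketched as a small check (an editor's illustration, not part of the patch; register groups are modeled as integer ranges of register numbers):

```python
# Sketch (illustration only): destination/source overlap for a widening op
# is legal only when the source EMUL is at least 1 and the source group sits
# in the highest-numbered part of the destination register group.

def overlap_is_legal(dest_start, dest_lmul, src_start, src_lmul):
    """Model register groups as [start, start + lmul) register ranges."""
    src_span = max(int(src_lmul), 1)  # a fractional EMUL still occupies one register
    dest_regs = set(range(dest_start, dest_start + dest_lmul))
    src_regs = set(range(src_start, src_start + src_span))
    if not dest_regs & src_regs:
        return True   # no overlap at all is always fine
    if src_lmul < 1:
        return False  # overlapping source EMUL must be at least 1
    # Highest-numbered part: the source group must end where the dest group ends.
    return src_start + src_lmul == dest_start + dest_lmul

# The ISA's own example: LMUL=8 vzext.vf4, so source EMUL = 2.
print(overlap_is_legal(0, 8, 6, 2))  # dest v0..v7, src v6..v7 -> True (legal)
print(overlap_is_legal(0, 8, 0, 2))  # src v0 -> False
print(overlap_is_legal(0, 8, 2, 2))  # src v2 -> False
```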
juzhe.zhong@rivai.ai
From: Feng Wang
Date: 2023-12-06 10:45
To: gcc-patches
CC: kito.cheng; jeffreyalaw; juzhe.zhong; zhusonghe; panciyan; Feng Wang
Subject: [PATCH 3/4] RISC-V: Add crypto vector machine descriptions
This patch adds the crypto machine descriptions (vector-crypto.md) and
some new iterators which are used by the crypto vector extensions.
Co-authored-by: Songhe Zhu <zhusonghe@eswincomputing.com>
Co-authored-by: Ciyan Pan <panciyan@eswincomputing.com>
gcc/ChangeLog:
* config/riscv/iterators.md: Add rotate insn name.
* config/riscv/riscv.md: Add new insns name for crypto vector.
* config/riscv/vector-iterators.md: Add new iterators for crypto vector.
* config/riscv/vector.md: Add the corresponding attr for crypto vector.
* config/riscv/vector-crypto.md: New file. The machine descriptions for crypto vectors.
---
gcc/config/riscv/iterators.md | 4 +-
gcc/config/riscv/riscv.md | 33 +-
gcc/config/riscv/vector-crypto.md | 500 +++++++++++++++++++++++++++
gcc/config/riscv/vector-iterators.md | 41 +++
gcc/config/riscv/vector.md | 49 ++-
5 files changed, 607 insertions(+), 20 deletions(-)
create mode 100755 gcc/config/riscv/vector-crypto.md
diff --git a/gcc/config/riscv/iterators.md b/gcc/config/riscv/iterators.md
index ecf033f2fa7..f332fba7031 100644
--- a/gcc/config/riscv/iterators.md
+++ b/gcc/config/riscv/iterators.md
@@ -304,7 +304,9 @@
(umax "maxu")
(clz "clz")
(ctz "ctz")
- (popcount "cpop")])
+ (popcount "cpop")
+ (rotate "rol")
+ (rotatert "ror")])
;; -------------------------------------------------------------------
;; Int Iterators.
diff --git a/gcc/config/riscv/riscv.md b/gcc/config/riscv/riscv.md
index 935eeb7fd8e..a887f3cd412 100644
--- a/gcc/config/riscv/riscv.md
+++ b/gcc/config/riscv/riscv.md
@@ -428,6 +428,34 @@
;; vcompress vector compress instruction
;; vmov whole vector register move
;; vector unknown vector instruction
+;; 17. Crypto Vector instructions
+;; vandn crypto vector bitwise and-not instructions
+;; vbrev crypto vector reverse bits in elements instructions
+;; vbrev8 crypto vector reverse bits in bytes instructions
+;; vrev8 crypto vector reverse bytes instructions
+;; vclz crypto vector count leading zeros instructions
+;; vctz crypto vector count trailing zeros instructions
+;; vrol crypto vector rotate left instructions
+;; vror crypto vector rotate right instructions
+;; vwsll crypto vector widening shift left logical instructions
+;; vclmul crypto vector carry-less multiply - return low half instructions
+;; vclmulh crypto vector carry-less multiply - return high half instructions
+;; vghsh crypto vector add-multiply over GHASH Galois-Field instructions
+;; vgmul crypto vector multiply over GHASH Galois-Field instructions
+;; vaesef crypto vector AES final-round encryption instructions
+;; vaesem crypto vector AES middle-round encryption instructions
+;; vaesdf crypto vector AES final-round decryption instructions
+;; vaesdm crypto vector AES middle-round decryption instructions
+;; vaeskf1 crypto vector AES-128 Forward KeySchedule generation instructions
+;; vaeskf2 crypto vector AES-256 Forward KeySchedule generation instructions
+;; vaesz crypto vector AES round zero encryption/decryption instructions
+;; vsha2ms crypto vector SHA-2 message schedule instructions
+;; vsha2ch crypto vector SHA-2 two rounds of compression instructions
+;; vsha2cl crypto vector SHA-2 two rounds of compression instructions
+;; vsm4k crypto vector SM4 KeyExpansion instructions
+;; vsm4r crypto vector SM4 Rounds instructions
+;; vsm3me crypto vector SM3 Message Expansion instructions
+;; vsm3c crypto vector SM3 Compression instructions
(define_attr "type"
"unknown,branch,jump,jalr,ret,call,load,fpload,store,fpstore,
mtc,mfc,const,arith,logical,shift,slt,imul,idiv,move,fmove,fadd,fmul,
@@ -447,7 +475,9 @@
vired,viwred,vfredu,vfredo,vfwredu,vfwredo,
vmalu,vmpop,vmffs,vmsfs,vmiota,vmidx,vimovvx,vimovxv,vfmovvf,vfmovfv,
vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,
- vgather,vcompress,vmov,vector"
+ vgather,vcompress,vmov,vector,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vcpop,vrol,vror,vwsll,
+ vclmul,vclmulh,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,vaeskf1,vaeskf2,vaesz,
+ vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r,vsm3me,vsm3c"
(cond [(eq_attr "got" "load") (const_string "load")
;; If a doubleword move uses these expensive instructions,
@@ -3747,6 +3777,7 @@
(include "thead.md")
(include "generic-ooo.md")
(include "vector.md")
+(include "vector-crypto.md")
(include "zicond.md")
(include "zc.md")
(include "corev.md")
diff --git a/gcc/config/riscv/vector-crypto.md b/gcc/config/riscv/vector-crypto.md
new file mode 100755
index 00000000000..a40ecef4342
--- /dev/null
+++ b/gcc/config/riscv/vector-crypto.md
@@ -0,0 +1,500 @@
+(define_c_enum "unspec" [
+ ;; Zvbb unspecs
+ UNSPEC_VBREV
+ UNSPEC_VBREV8
+ UNSPEC_VREV8
+ UNSPEC_VCLMUL
+ UNSPEC_VCLMULH
+ UNSPEC_VGHSH
+ UNSPEC_VGMUL
+ UNSPEC_VAESEF
+ UNSPEC_VAESEFVV
+ UNSPEC_VAESEFVS
+ UNSPEC_VAESEM
+ UNSPEC_VAESEMVV
+ UNSPEC_VAESEMVS
+ UNSPEC_VAESDF
+ UNSPEC_VAESDFVV
+ UNSPEC_VAESDFVS
+ UNSPEC_VAESDM
+ UNSPEC_VAESDMVV
+ UNSPEC_VAESDMVS
+ UNSPEC_VAESZ
+ UNSPEC_VAESZVVNULL
+ UNSPEC_VAESZVS
+ UNSPEC_VAESKF1
+ UNSPEC_VAESKF2
+ UNSPEC_VSHA2MS
+ UNSPEC_VSHA2CH
+ UNSPEC_VSHA2CL
+ UNSPEC_VSM4K
+ UNSPEC_VSM4R
+ UNSPEC_VSM4RVV
+ UNSPEC_VSM4RVS
+ UNSPEC_VSM3ME
+ UNSPEC_VSM3C
+])
+
+(define_int_attr rev [(UNSPEC_VBREV "brev") (UNSPEC_VBREV8 "brev8") (UNSPEC_VREV8 "rev8")])
+
+(define_int_attr h [(UNSPEC_VCLMUL "") (UNSPEC_VCLMULH "h")])
+
+(define_int_attr vv_ins_name [(UNSPEC_VGMUL "gmul" ) (UNSPEC_VAESEFVV "aesef")
+ (UNSPEC_VAESEMVV "aesem") (UNSPEC_VAESDFVV "aesdf")
+ (UNSPEC_VAESDMVV "aesdm") (UNSPEC_VAESEFVS "aesef")
+ (UNSPEC_VAESEMVS "aesem") (UNSPEC_VAESDFVS "aesdf")
+ (UNSPEC_VAESDMVS "aesdm") (UNSPEC_VAESZVS "aesz" )
+ (UNSPEC_VSM4RVV "sm4r" ) (UNSPEC_VSM4RVS "sm4r" )])
+
+(define_int_attr vv_ins1_name [(UNSPEC_VGHSH "ghsh") (UNSPEC_VSHA2MS "sha2ms")
+ (UNSPEC_VSHA2CH "sha2ch") (UNSPEC_VSHA2CL "sha2cl")])
+
+(define_int_attr vi_ins_name [(UNSPEC_VAESKF1 "aeskf1") (UNSPEC_VSM4K "sm4k")])
+
+(define_int_attr vi_ins1_name [(UNSPEC_VAESKF2 "aeskf2") (UNSPEC_VSM3C "sm3c")])
+
+(define_int_attr ins_type [(UNSPEC_VGMUL "vv") (UNSPEC_VAESEFVV "vv")
+ (UNSPEC_VAESEMVV "vv") (UNSPEC_VAESDFVV "vv")
+ (UNSPEC_VAESDMVV "vv") (UNSPEC_VAESEFVS "vs")
+ (UNSPEC_VAESEMVS "vs") (UNSPEC_VAESDFVS "vs")
+ (UNSPEC_VAESDMVS "vs") (UNSPEC_VAESZVS "vs")
+ (UNSPEC_VSM4RVV "vv") (UNSPEC_VSM4RVS "vs")])
+
+(define_int_iterator UNSPEC_VRBB8 [UNSPEC_VBREV UNSPEC_VBREV8 UNSPEC_VREV8])
+
+(define_int_iterator UNSPEC_CLMUL [UNSPEC_VCLMUL UNSPEC_VCLMULH])
+
+(define_int_iterator UNSPEC_CRYPTO_VV [UNSPEC_VGMUL UNSPEC_VAESEFVV UNSPEC_VAESEMVV
+ UNSPEC_VAESDFVV UNSPEC_VAESDMVV UNSPEC_VAESEFVS
+ UNSPEC_VAESEMVS UNSPEC_VAESDFVS UNSPEC_VAESDMVS
+ UNSPEC_VAESZVS UNSPEC_VSM4RVV UNSPEC_VSM4RVS])
+
+(define_int_iterator UNSPEC_VGNHAB [UNSPEC_VGHSH UNSPEC_VSHA2MS UNSPEC_VSHA2CH UNSPEC_VSHA2CL])
+
+(define_int_iterator UNSPEC_CRYPTO_VI [UNSPEC_VAESKF1 UNSPEC_VSM4K])
+
+(define_int_iterator UNSPEC_CRYPTO_VI1 [UNSPEC_VAESKF2 UNSPEC_VSM3C])
+
+;; zvbb instructions patterns.
+;; vandn.vv vandn.vx vrol.vv vrol.vx
+;; vror.vv vror.vx vror.vi
+;; vwsll.vv vwsll.vx vwsll.vi
+
+(define_insn "@pred_vandn<mode>"
+ [(set (match_operand:VI 0 "register_operand" "=vd,vd")
+ (if_then_else:VI
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" "rK,rK")
+ (match_operand 6 "const_int_operand" "i, i")
+ (match_operand 7 "const_int_operand" "i, i")
+ (match_operand 8 "const_int_operand" "i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (and:VI
+ (match_operand:VI 3 "register_operand" "vr,vr")
+ (not:VI (match_operand:VI 4 "register_operand" "vr,vr")))
+ (match_operand:VI 2 "vector_merge_operand" "vu, 0")))]
+ "TARGET_ZVBB || TARGET_ZVKB"
+ "vandn.vv\t%0,%3,%4%p1"
+ [(set_attr "type" "vandn")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_vandn<mode>_scalar"
+ [(set (match_operand:VI 0 "register_operand" "=vd,vd")
+ (if_then_else:VI
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" "rK,rK")
+ (match_operand 6 "const_int_operand" "i, i")
+ (match_operand 7 "const_int_operand" "i, i")
+ (match_operand 8 "const_int_operand" "i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (and:VI
+ (match_operand:VI 3 "register_operand" "vr,vr")
+ (not:<VEL>
+ (match_operand:<VEL> 4 "register_operand" "r,r")))
+ (match_operand:VI 2 "vector_merge_operand" "vu, 0")))]
+ "TARGET_ZVBB || TARGET_ZVKB"
+ "vandn.vx\t%0,%3,%4%p1"
+ [(set_attr "type" "vandn")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_v<bitmanip_optab><mode>"
+ [(set (match_operand:VI 0 "register_operand" "=vd,vd")
+ (if_then_else:VI
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" "rK,rK")
+ (match_operand 6 "const_int_operand" "i, i")
+ (match_operand 7 "const_int_operand" "i, i")
+ (match_operand 8 "const_int_operand" "i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (bitmanip_rotate:VI
+ (match_operand:VI 3 "register_operand" "vr,vr")
+ (match_operand:VI 4 "register_operand" "vr,vr"))
+ (match_operand:VI 2 "vector_merge_operand" "vu, 0")))]
+ "TARGET_ZVBB || TARGET_ZVKB"
+ "v<bitmanip_insn>.vv\t%0,%3,%4%p1"
+ [(set_attr "type" "v<bitmanip_insn>")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_v<bitmanip_optab><mode>_scalar"
+ [(set (match_operand:VI 0 "register_operand" "=vd, vd")
+ (if_then_else:VI
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1, vmWc1")
+ (match_operand 5 "vector_length_operand" "rK, rK")
+ (match_operand 6 "const_int_operand" "i, i")
+ (match_operand 7 "const_int_operand" "i, i")
+ (match_operand 8 "const_int_operand" "i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (bitmanip_rotate:VI
+ (match_operand:VI 3 "register_operand" "vr, vr")
+ (match_operand 4 "pmode_register_operand" "r, r"))
+ (match_operand:VI 2 "vector_merge_operand" "vu, 0")))]
+ "TARGET_ZVBB || TARGET_ZVKB"
+ "v<bitmanip_insn>.vx\t%0,%3,%4%p1"
+ [(set_attr "type" "v<bitmanip_insn>")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_vror<mode>_scalar"
+ [(set (match_operand:VI 0 "register_operand" "=vd, vd")
+ (if_then_else:VI
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1, vmWc1")
+ (match_operand 5 "vector_length_operand" "rK, rK")
+ (match_operand 6 "const_int_operand" "i, i")
+ (match_operand 7 "const_int_operand" "i, i")
+ (match_operand 8 "const_int_operand" "i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (rotatert:VI
+ (match_operand:VI 3 "register_operand" "vr, vr")
+ (match_operand 4 "const_csr_operand" "K, K"))
+ (match_operand:VI 2 "vector_merge_operand" "vu, 0")))]
+ "TARGET_ZVBB || TARGET_ZVKB"
+ "vror.vi\t%0,%3,%4%p1"
+ [(set_attr "type" "vror")
+ (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_vwsll<mode>"
+ [(set (match_operand:VWEXTI 0 "register_operand" "=&vd")
+ (if_then_else:VWEXTI
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+ (match_operand 5 "vector_length_operand" " rK")
+ (match_operand 6 "const_int_operand" " i")
+ (match_operand 7 "const_int_operand" " i")
+ (match_operand 8 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (ashift:VWEXTI
+ (zero_extend:VWEXTI
+ (match_operand:<V_DOUBLE_TRUNC> 3 "register_operand" "vr"))
+ (match_operand:<V_DOUBLE_TRUNC> 4 "register_operand" "vr"))
+ (match_operand:VWEXTI 2 "vector_merge_operand" "0vu")))]
+ "TARGET_ZVBB"
+ "vwsll.vv\t%0,%3,%4%p1"
+ [(set_attr "type" "vwsll")
+ (set_attr "mode" "<VWEXTI:MODE>")])
+
+(define_insn "@pred_vwsll<mode>_scalar"
+ [(set (match_operand:VWEXTI 0 "register_operand" "=&vd")
+ (if_then_else:VWEXTI
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+ (match_operand 5 "vector_length_operand" " rK")
+ (match_operand 6 "const_int_operand" " i")
+ (match_operand 7 "const_int_operand" " i")
+ (match_operand 8 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (ashift:VWEXTI
+ (zero_extend:VWEXTI
+ (match_operand:<V_DOUBLE_TRUNC> 3 "register_operand" "vr"))
+ (match_operand:<VSUBEL> 4 "pmode_reg_or_uimm5_operand" "rK"))
+ (match_operand:VWEXTI 2 "vector_merge_operand" "0vu")))]
+ "TARGET_ZVBB"
+ "vwsll.v%o4\t%0,%3,%4%p1"
+ [(set_attr "type" "vwsll")
+ (set_attr "mode" "<VWEXTI:MODE>")])
+
+;; vbrev.v vbrev8.v vrev8.v
+
+(define_insn "@pred_v<rev><mode>"
+ [(set (match_operand:VI 0 "register_operand" "=vd,vd")
+ (if_then_else:VI
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 4 "vector_length_operand" "rK,rK")
+ (match_operand 5 "const_int_operand" "i, i")
+ (match_operand 6 "const_int_operand" "i, i")
+ (match_operand 7 "const_int_operand" "i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VI
+ [(match_operand:VI 3 "register_operand" "vr,vr")]UNSPEC_VRBB8)
+ (match_operand:VI 2 "vector_merge_operand" "vu, 0")))]
+ "TARGET_ZVBB || TARGET_ZVKB"
+ "v<rev>.v\t%0,%3%p1"
+ [(set_attr "type" "v<rev>")
+ (set_attr "mode" "<MODE>")])
+
+;; vclz.v vctz.v
+
+(define_insn "@pred_v<bitmanip_optab><mode>"
+ [(set (match_operand:VI 0 "register_operand" "=vd")
+ (clz_ctz_pcnt:VI
+ (parallel
+ [(match_operand:VI 2 "register_operand" "vr")
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+ (match_operand 3 "vector_length_operand" " rK")
+ (match_operand 4 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)])))]
+ "TARGET_ZVBB"
+ "v<bitmanip_insn>.v\t%0,%2%p1"
+ [(set_attr "type" "v<bitmanip_insn>")
+ (set_attr "mode" "<MODE>")])
+
+;; zvbc instructions patterns.
+;; vclmul.vv vclmul.vx
+;; vclmulh.vv vclmulh.vx
+
+(define_insn "@pred_vclmul<h><mode>"
+ [(set (match_operand:VDI 0 "register_operand" "=vd,vd")
+ (if_then_else:VDI
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" "rK,rK")
+ (match_operand 6 "const_int_operand" "i, i")
+ (match_operand 7 "const_int_operand" "i, i")
+ (match_operand 8 "const_int_operand" "i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VDI
+ [(match_operand:VDI 3 "register_operand" "vr,vr")
+ (match_operand:VDI 4 "register_operand" "vr,vr")]UNSPEC_CLMUL)
+ (match_operand:VDI 2 "vector_merge_operand" "vu, 0")))]
+ "TARGET_ZVBC && TARGET_64BIT"
+ "vclmul<h>.vv\t%0,%3,%4%p1"
+ [(set_attr "type" "vclmul<h>")
+ (set_attr "mode" "<VDI:MODE>")])
+
+(define_insn "@pred_vclmul<h><mode>_scalar"
+ [(set (match_operand:VDI 0 "register_operand" "=vd,vd")
+ (if_then_else:VDI
+ (unspec:<VM>
+ [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+ (match_operand 5 "vector_length_operand" "rK,rK")
+ (match_operand 6 "const_int_operand" "i, i")
+ (match_operand 7 "const_int_operand" "i, i")
+ (match_operand 8 "const_int_operand" "i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VDI
+ [(match_operand:VDI 3 "register_operand" "vr,vr")
+ (match_operand:<VDI:VEL> 4 "register_operand" "r,r")]UNSPEC_CLMUL)
+ (match_operand:VDI 2 "vector_merge_operand" "vu, 0")))]
+ "TARGET_ZVBC && TARGET_64BIT"
+ "vclmul<h>.vx\t%0,%3,%4%p1"
+ [(set_attr "type" "vclmul<h>")
+ (set_attr "mode" "<VDI:MODE>")])
+
+;; zvknh[ab] and zvkg instructions patterns.
+;; vsha2ms.vv vsha2ch.vv vsha2cl.vv vghsh.vv
+
+(define_insn "@pred_v<vv_ins1_name><mode>"
+ [(set (match_operand:VQEXTI 0 "register_operand" "=vd")
+ (if_then_else:VQEXTI
+ (unspec:<VQEXTI:VM>
+ [(match_operand 4 "vector_length_operand" "rK")
+ (match_operand 5 "const_int_operand" " i")
+ (match_operand 6 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VQEXTI
+ [(match_operand:VQEXTI 1 "register_operand" " 0")
+ (match_operand:VQEXTI 2 "register_operand" "vr")
+ (match_operand:VQEXTI 3 "register_operand" "vr")] UNSPEC_VGNHAB)
+ (match_dup 1)))]
+ "TARGET_ZVKNHA || TARGET_ZVKNHB || TARGET_ZVKG"
+ "v<vv_ins1_name>.vv\t%0,%2,%3"
+ [(set_attr "type" "v<vv_ins1_name>")
+ (set_attr "mode" "<VQEXTI:MODE>")])
+
+;; zvkned instructions patterns.
+;; vgmul.vv vaesz.vs
+;; vaesef.[vv,vs] vaesem.[vv,vs] vaesdf.[vv,vs] vaesdm.[vv,vs]
+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type><mode>"
+ [(set (match_operand:VSI 0 "register_operand" "=vd")
+ (if_then_else:VSI
+ (unspec:<VM>
+ [(match_operand 3 "vector_length_operand" "rK")
+ (match_operand 4 "const_int_operand" " i")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VSI
+ [(match_operand:VSI 1 "register_operand" " 0")
+ (match_operand:VSI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
+ (match_dup 1)))]
+ "TARGET_ZVKG || TARGET_ZVKNED"
+ "v<vv_ins_name>.<ins_type>\t%0,%2"
+ [(set_attr "type" "v<vv_ins_name>")
+ (set_attr "mode" "<VSI:MODE>")])
+
+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x1<mode>_scalar"
+ [(set (match_operand:VSI 0 "register_operand" "=vd")
+ (if_then_else:VSI
+ (unspec:<VM>
+ [(match_operand 3 "vector_length_operand" "rK")
+ (match_operand 4 "const_int_operand" " i")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VSI
+ [(match_operand:VSI 1 "register_operand" " 0")
+ (match_operand:VSI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
+ (match_dup 1)))]
+ "TARGET_ZVKNED || TARGET_ZVKSED"
+ "v<vv_ins_name>.<ins_type>\t%0,%2"
+ [(set_attr "type" "v<vv_ins_name>")
+ (set_attr "mode" "<VSI:MODE>")])
+
+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x2<mode>_scalar"
+ [(set (match_operand:<VSIX2> 0 "register_operand" "=vd")
+ (if_then_else:<VSIX2>
+ (unspec:<VM>
+ [(match_operand 3 "vector_length_operand" "rK")
+ (match_operand 4 "const_int_operand" " i")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:<VSIX2>
+ [(match_operand:<VSIX2> 1 "register_operand" " 0")
+ (match_operand:VLMULX2_SI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
+ (match_dup 1)))]
+ "TARGET_ZVKNED || TARGET_ZVKSED"
+ "v<vv_ins_name>.<ins_type>\t%0,%2"
+ [(set_attr "type" "v<vv_ins_name>")
+ (set_attr "mode" "<VLMULX2_SI:MODE>")])
+
+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x4<mode>_scalar"
+ [(set (match_operand:<VSIX4> 0 "register_operand" "=vd")
+ (if_then_else:<VSIX4>
+ (unspec:<VM>
+ [(match_operand 3 "vector_length_operand" "rK")
+ (match_operand 4 "const_int_operand" " i")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:<VSIX4>
+ [(match_operand:<VSIX4> 1 "register_operand" " 0")
+ (match_operand:VLMULX4_SI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
+ (match_dup 1)))]
+ "TARGET_ZVKNED || TARGET_ZVKSED"
+ "v<vv_ins_name>.<ins_type>\t%0,%2"
+ [(set_attr "type" "v<vv_ins_name>")
+ (set_attr "mode" "<VLMULX4_SI:MODE>")])
+
+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x8<mode>_scalar"
+ [(set (match_operand:<VSIX8> 0 "register_operand" "=vd")
+ (if_then_else:<VSIX8>
+ (unspec:<VM>
+ [(match_operand 3 "vector_length_operand" "rK")
+ (match_operand 4 "const_int_operand" " i")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:<VSIX8>
+ [(match_operand:<VSIX8> 1 "register_operand" " 0")
+ (match_operand:VLMULX8_SI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
+ (match_dup 1)))]
+ "TARGET_ZVKNED || TARGET_ZVKSED"
+ "v<vv_ins_name>.<ins_type>\t%0,%2"
+ [(set_attr "type" "v<vv_ins_name>")
+ (set_attr "mode" "<VLMULX8_SI:MODE>")])
+
+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x16<mode>_scalar"
+ [(set (match_operand:<VSIX16> 0 "register_operand" "=vd")
+ (if_then_else:<VSIX16>
+ (unspec:<VM>
+ [(match_operand 3 "vector_length_operand" "rK")
+ (match_operand 4 "const_int_operand" " i")
+ (match_operand 5 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:<VSIX16>
+ [(match_operand:<VSIX16> 1 "register_operand" " 0")
+ (match_operand:VLMULX16_SI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
+ (match_dup 1)))]
+ "TARGET_ZVKNED || TARGET_ZVKSED"
+ "v<vv_ins_name>.<ins_type>\t%0,%2"
+ [(set_attr "type" "v<vv_ins_name>")
+ (set_attr "mode" "<VLMULX16_SI:MODE>")])
+
+;; vaeskf1.vi vsm4k.vi
+(define_insn "@pred_crypto_vi<vi_ins_name><mode>_scalar"
+ [(set (match_operand:VSI 0 "register_operand" "=vd, vd")
+ (if_then_else:VSI
+ (unspec:<VM>
+ [(match_operand 4 "vector_length_operand" "rK, rK")
+ (match_operand 5 "const_int_operand" " i, i")
+ (match_operand 6 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VSI
+ [(match_operand:VSI 2 "register_operand" "vr, vr")
+ (match_operand:<VEL> 3 "const_int_operand" " i, i")] UNSPEC_CRYPTO_VI)
+ (match_operand:VSI 1 "vector_merge_operand" "vu, 0")))]
+ "TARGET_ZVKNED || TARGET_ZVKSED"
+ "v<vi_ins_name>.vi\t%0,%2,%3"
+ [(set_attr "type" "v<vi_ins_name>")
+ (set_attr "mode" "<MODE>")])
+
+;; vaeskf2.vi vsm3c.vi
+(define_insn "@pred_vi<vi_ins1_name><mode>_nomaskedoff_scalar"
+ [(set (match_operand:VSI 0 "register_operand" "=vd")
+ (if_then_else:VSI
+ (unspec:<VSI:VM>
+ [(match_operand 4 "vector_length_operand" "rK")
+ (match_operand 5 "const_int_operand" " i")
+ (match_operand 6 "const_int_operand" " i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VSI
+ [(match_operand:VSI 1 "register_operand" "0")
+ (match_operand:VSI 2 "register_operand" "vr")
+ (match_operand:<VEL> 3 "const_int_operand" " i")] UNSPEC_CRYPTO_VI1)
+ (match_dup 1)))]
+ "TARGET_ZVKNED || TARGET_ZVKSH"
+ "v<vi_ins1_name>.vi\t%0,%2,%3"
+ [(set_attr "type" "v<vi_ins1_name>")
+ (set_attr "mode" "<MODE>")])
+
+;; zvksh instructions patterns.
+;; vsm3me.vv
+
+(define_insn "@pred_vsm3me<mode>"
+ [(set (match_operand:VSI 0 "register_operand" "=vd, vd")
+ (if_then_else:VSI
+ (unspec:<VM>
+ [(match_operand 4 "vector_length_operand" "rK, rK")
+ (match_operand 5 "const_int_operand" " i, i")
+ (match_operand 6 "const_int_operand" " i, i")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+ (unspec:VSI
+ [(match_operand:VSI 2 "register_operand" "vr, vr")
+ (match_operand:VSI 3 "register_operand" "vr, vr")] UNSPEC_VSM3ME)
+ (match_operand:VSI 1 "vector_merge_operand" "vu, 0")))]
+ "TARGET_ZVKSH"
+ "vsm3me.vv\t%0,%2,%3"
+ [(set_attr "type" "vsm3me")
+ (set_attr "mode" "<VSI:MODE>")])
\ No newline at end of file
diff --git a/gcc/config/riscv/vector-iterators.md b/gcc/config/riscv/vector-iterators.md
index 56080ed1f5f..1b16b476035 100644
--- a/gcc/config/riscv/vector-iterators.md
+++ b/gcc/config/riscv/vector-iterators.md
@@ -3916,3 +3916,44 @@
(V1024BI "riscv_vector::vls_mode_valid_p (V1024BImode) && TARGET_MIN_VLEN >= 1024")
(V2048BI "riscv_vector::vls_mode_valid_p (V2048BImode) && TARGET_MIN_VLEN >= 2048")
(V4096BI "riscv_vector::vls_mode_valid_p (V4096BImode) && TARGET_MIN_VLEN >= 4096")])
+
+(define_mode_iterator VSI [
+ RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+])
+
+(define_mode_iterator VLMULX2_SI [
+ RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+])
+
+(define_mode_iterator VLMULX4_SI [
+ RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+])
+
+(define_mode_iterator VLMULX8_SI [
+ RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+])
+
+(define_mode_iterator VLMULX16_SI [
+ (RVVMF2SI "TARGET_MIN_VLEN > 32")
+])
+
+(define_mode_attr VSIX2 [
+ (RVVM8SI "RVVM8SI") (RVVM4SI "RVVM8SI") (RVVM2SI "RVVM4SI") (RVVM1SI "RVVM2SI") (RVVMF2SI "RVVM1SI")
+])
+
+(define_mode_attr VSIX4 [
+ (RVVM2SI "RVVM8SI") (RVVM1SI "RVVM4SI") (RVVMF2SI "RVVM2SI")
+])
+
+(define_mode_attr VSIX8 [
+ (RVVM1SI "RVVM8SI") (RVVMF2SI "RVVM4SI")
+])
+
+(define_mode_attr VSIX16 [
+ (RVVMF2SI "RVVM8SI")
+])
+
+(define_mode_iterator VDI [
+ (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
+ (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
+])
\ No newline at end of file
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index ba9c9e5a9b6..36cb7510ec6 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -52,7 +52,9 @@
vmalu,vmpop,vmffs,vmsfs,vmiota,vmidx,vimovvx,vimovxv,vfmovvf,vfmovfv,\
vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
vgather,vcompress,vlsegde,vssegte,vlsegds,vssegts,vlsegdux,vlsegdox,\
- vssegtux,vssegtox,vlsegdff")
+ vssegtux,vssegtox,vlsegdff,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,\
+ vror,vwsll,vclmul,vclmulh,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
+ vaeskf1,vaeskf2,vaesz,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r,vsm3me,vsm3c")
(const_string "true")]
(const_string "false")))
@@ -74,7 +76,9 @@
vmalu,vmpop,vmffs,vmsfs,vmiota,vmidx,vimovxv,vfmovfv,\
vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
vgather,vcompress,vlsegde,vssegte,vlsegds,vssegts,vlsegdux,vlsegdox,\
- vssegtux,vssegtox,vlsegdff")
+ vssegtux,vssegtox,vlsegdff,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,\
+ vror,vwsll,vclmul,vclmulh,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
+ vaeskf1,vaeskf2,vaesz,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r,vsm3me,vsm3c")
(const_string "true")]
(const_string "false")))
@@ -698,10 +702,12 @@
vfwcvtftoi,vfwcvtftof,vfncvtitof,vfncvtftoi,vfncvtftof,vfclass,\
vired,viwred,vfredu,vfredo,vfwredu,vfwredo,vimovxv,vfmovfv,\
vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
- vgather,vldff,viwmuladd,vfwmuladd,vlsegde,vlsegds,vlsegdux,vlsegdox,vlsegdff")
+ vgather,vldff,viwmuladd,vfwmuladd,vlsegde,vlsegds,vlsegdux,vlsegdox,vlsegdff,\
+ vandn,vbrev,vbrev8,vrev8,vrol,vror,vwsll,vclmul,vclmulh")
(const_int 2)
- (eq_attr "type" "vimerge,vfmerge,vcompress")
+ (eq_attr "type" "vimerge,vfmerge,vcompress,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
+ vaeskf1,vaeskf2,vaesz,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r,vsm3me,vsm3c")
(const_int 1)
(eq_attr "type" "vimuladd,vfmuladd")
@@ -740,7 +746,8 @@
vstox,vext,vmsfs,vmiota,vfsqrt,vfrecp,vfcvtitof,vldff,\
vfcvtftoi,vfwcvtitof,vfwcvtftoi,vfwcvtftof,vfncvtitof,\
vfncvtftoi,vfncvtftof,vfclass,vimovxv,vfmovfv,vcompress,\
- vlsegde,vssegts,vssegtux,vssegtox,vlsegdff")
+ vlsegde,vssegts,vssegtux,vssegtox,vlsegdff,vbrev,vbrev8,vrev8,\
+ vghsh,vaeskf1,vaeskf2,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm3me,vsm3c")
(const_int 4)
;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
@@ -755,13 +762,15 @@
vsshift,vnclip,vfalu,vfmul,vfminmax,vfdiv,vfwalu,vfwmul,\
vfsgnj,vfmerge,vired,viwred,vfredu,vfredo,vfwredu,vfwredo,\
vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
- vgather,viwmuladd,vfwmuladd,vlsegds,vlsegdux,vlsegdox")
+ vgather,viwmuladd,vfwmuladd,vlsegds,vlsegdux,vlsegdox,vandn,vrol,\
+ vror,vwsll,vclmul,vclmulh")
(const_int 5)
(eq_attr "type" "vicmp,vimuladd,vfcmp,vfmuladd")
(const_int 6)
- (eq_attr "type" "vmpop,vmffs,vmidx,vssegte")
+ (eq_attr "type" "vmpop,vmffs,vmidx,vssegte,vclz,vctz,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
+ vaesz,vsm4r")
(const_int 3)]
(const_int INVALID_ATTRIBUTE)))
@@ -770,7 +779,8 @@
(cond [(eq_attr "type" "vlde,vimov,vfmov,vext,vmiota,vfsqrt,vfrecp,\
vfcvtitof,vfcvtftoi,vfwcvtitof,vfwcvtftoi,vfwcvtftof,\
vfncvtitof,vfncvtftoi,vfncvtftof,vfclass,vimovxv,vfmovfv,\
- vcompress,vldff,vlsegde,vlsegdff")
+ vcompress,vldff,vlsegde,vlsegdff,vbrev,vbrev8,vrev8,vghsh,\
+ vaeskf1,vaeskf2,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm3me,vsm3c")
(symbol_ref "riscv_vector::get_ta(operands[5])")
;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
@@ -786,13 +796,13 @@
vfwalu,vfwmul,vfsgnj,vfmerge,vired,viwred,vfredu,\
vfredo,vfwredu,vfwredo,vslideup,vslidedown,vislide1up,\
vislide1down,vfslide1up,vfslide1down,vgather,viwmuladd,vfwmuladd,\
- vlsegds,vlsegdux,vlsegdox")
+ vlsegds,vlsegdux,vlsegdox,vandn,vrol,vror,vwsll,vclmul,vclmulh")
(symbol_ref "riscv_vector::get_ta(operands[6])")
(eq_attr "type" "vimuladd,vfmuladd")
(symbol_ref "riscv_vector::get_ta(operands[7])")
- (eq_attr "type" "vmidx")
+ (eq_attr "type" "vmidx,vgmul,vaesef,vaesem,vaesdf,vaesdm,vaesz,vsm4r")
(symbol_ref "riscv_vector::get_ta(operands[4])")]
(const_int INVALID_ATTRIBUTE)))
@@ -800,7 +810,7 @@
(define_attr "ma" ""
(cond [(eq_attr "type" "vlde,vext,vmiota,vfsqrt,vfrecp,vfcvtitof,vfcvtftoi,\
vfwcvtitof,vfwcvtftoi,vfwcvtftof,vfncvtitof,vfncvtftoi,\
- vfncvtftof,vfclass,vldff,vlsegde,vlsegdff")
+ vfncvtftof,vfclass,vldff,vlsegde,vlsegdff,vbrev,vbrev8,vrev8")
(symbol_ref "riscv_vector::get_ma(operands[6])")
;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
@@ -815,7 +825,8 @@
vnclip,vicmp,vfalu,vfmul,vfminmax,vfdiv,\
vfwalu,vfwmul,vfsgnj,vfcmp,vslideup,vslidedown,\
vislide1up,vislide1down,vfslide1up,vfslide1down,vgather,\
- viwmuladd,vfwmuladd,vlsegds,vlsegdux,vlsegdox")
+ viwmuladd,vfwmuladd,vlsegds,vlsegdux,vlsegdox,vandn,vrol,\
+ vror,vwsll,vclmul,vclmulh")
(symbol_ref "riscv_vector::get_ma(operands[7])")
(eq_attr "type" "vimuladd,vfmuladd")
@@ -831,9 +842,10 @@
vfsqrt,vfrecp,vfmerge,vfcvtitof,vfcvtftoi,vfwcvtitof,\
vfwcvtftoi,vfwcvtftof,vfncvtitof,vfncvtftoi,vfncvtftof,\
vfclass,vired,viwred,vfredu,vfredo,vfwredu,vfwredo,\
- vimovxv,vfmovfv,vlsegde,vlsegdff")
+ vimovxv,vfmovfv,vlsegde,vlsegdff,vbrev,vbrev8,vrev8")
(const_int 7)
- (eq_attr "type" "vldm,vstm,vmalu,vmalu")
+ (eq_attr "type" "vldm,vstm,vmalu,vmalu,vgmul,vaesef,vaesem,vaesdf,vaesdm,vaesz,\
+ vsm4r")
(const_int 5)
;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
@@ -848,18 +860,19 @@
vnclip,vicmp,vfalu,vfmul,vfminmax,vfdiv,vfwalu,vfwmul,\
vfsgnj,vfcmp,vslideup,vslidedown,vislide1up,\
vislide1down,vfslide1up,vfslide1down,vgather,viwmuladd,vfwmuladd,\
- vlsegds,vlsegdux,vlsegdox")
+ vlsegds,vlsegdux,vlsegdox,vandn,vrol,vror,vwsll")
(const_int 8)
- (eq_attr "type" "vstux,vstox,vssegts,vssegtux,vssegtox")
+ (eq_attr "type" "vstux,vstox,vssegts,vssegtux,vssegtox,vclmul,vclmulh")
(const_int 5)
(eq_attr "type" "vimuladd,vfmuladd")
(const_int 9)
- (eq_attr "type" "vmsfs,vmidx,vcompress")
+ (eq_attr "type" "vmsfs,vmidx,vcompress,vghsh,vaeskf1,vaeskf2,vsha2ms,vsha2ch,vsha2cl,\
+ vsm4k,vsm3me,vsm3c")
(const_int 6)
- (eq_attr "type" "vmpop,vmffs,vssegte")
+ (eq_attr "type" "vmpop,vmffs,vssegte,vclz,vctz")
(const_int 4)]
(const_int INVALID_ATTRIBUTE)))
2023-12-06 14:53 juzhe.zhong <juzhe.zhong@rivai.ai> wrote:
>Do vector crypto instructions demand RATIO ?
>
>If no, add them into:
>
>;; It is valid for instruction that require sew/lmul ratio.
>(define_attr "ratio" ""
>  (cond [(eq_attr "type" "vimov,vfmov,vldux,vldox,vstux,vstox,\
>                          vialu,vshift,vicmp,vimul,vidiv,vsalu,\
>                          vext,viwalu,viwmul,vicalu,vnshift,\
>                          vimuladd,vimerge,vaalu,vsmul,vsshift,\
>                          vnclip,viminmax,viwmuladd,vmffs,vmsfs,\
>                          vmiota,vmidx,vfalu,vfmul,vfminmax,vfdiv,\
>                          vfwalu,vfwmul,vfsqrt,vfrecp,vfsgnj,vfcmp,\
>                          vfmerge,vfcvtitof,vfcvtftoi,vfwcvtitof,\
>                          vfwcvtftoi,vfwcvtftof,vfncvtitof,vfncvtftoi,\
>                          vfncvtftof,vfmuladd,vfwmuladd,vfclass,vired,\
>                          viwred,vfredu,vfredo,vfwredu,vfwredo,vimovvx,\
>                          vimovxv,vfmovvf,vfmovfv,vslideup,vslidedown,\
>                          vislide1up,vislide1down,vfslide1up,vfslide1down,\
>                          vgather,vcompress,vlsegdux,vlsegdox,vssegtux,vssegtox")
>        (const_int INVALID_ATTRIBUTE)
>
Modified, thanks!
>
>+(define_insn "@pred_vandn<mode>"
>+  [(set (match_operand:VI 0 "register_operand"  "=vd,vd")
>
>Seems all vector crypto instructions are not allowed to use v0 ? Why not use vr?
>
>+   (set_attr "mode" "<VWEXTI:MODE>")])
>use <MODE> is enough.
Done.
>
>+(define_insn "@pred_vwsll<mode>_scalar"
>+  [(set (match_operand:VWEXTI 0 "register_operand" "=&vd")
>+     (if_then_else:VWEXTI
>+       (unspec:<VM>
>+         [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
>+          (match_operand 5 "vector_length_operand"    " rK")
>+          (match_operand 6 "const_int_operand"        "  i")
>+          (match_operand 7 "const_int_operand"        "  i")
>+          (match_operand 8 "const_int_operand"        "  i")
>+          (reg:SI VL_REGNUM)
>+          (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
>+       (ashift:VWEXTI
>+         (zero_extend:VWEXTI
>+           (match_operand:<V_DOUBLE_TRUNC> 3 "register_operand" "vr"))
>+         (match_operand:<VSUBEL> 4 "pmode_reg_or_uimm5_operand" "rK"))
>+       (match_operand:VWEXTI 2 "vector_merge_operand" "0vu")))]
>+  "TARGET_ZVBB"
>+  "vwsll.v%o4\t%0,%3,%4%p1"
>+  [(set_attr "type" "vwsll")
>+   (set_attr "mode" "<VWEXTI:MODE>")])
>
>Seems that we can leverage EEW widen overlap ?
>
>See RVV ISA:
>
>  ;; According to RVV ISA:
>  ;; The destination EEW is greater than the source EEW, the source EMUL is at least 1,
>  ;; and the overlap is in the highest-numbered part of the destination register group
>  ;; (e.g., when LMUL=8, vzext.vf4 v0, v6 is legal, but a source of v0, v2, or v4 is not).
>  ;; So the source operand should have LMUL >= 1.
>
>Reference patch: https://gcc.gnu.org/pipermail/gcc-patches/2023-December/638869.html
>
>Currently, I don't have a solution to support highest-number overlap for vv instructions.
>Keeping them early clobber for now is ok.
>
>
>
>juzhe.zhong@rivai.ai
Will update this part after your patch is merged.
>
>From: Feng Wang
>Date: 2023-12-06 10:45
>To: gcc-patches
>CC: kito.cheng; jeffreyalaw; juzhe.zhong; zhusonghe; panciyan; Feng Wang
>Subject: [PATCH 3/4] RISC-V: Add crypto vector machine descriptions
>This patch adds the crypto machine descriptions (vector-crypto.md) and
>some new iterators which are used by the crypto vector ext.
>
>Co-Authored by: Songhe Zhu <zhusonghe@eswincomputing.com>
>Co-Authored by: Ciyan Pan <panciyan@eswincomputing.com>
>
>gcc/ChangeLog:
>
>	* config/riscv/iterators.md: Add rotate insn name.
>	* config/riscv/riscv.md: Add new insns name for crypto vector.
>	* config/riscv/vector-iterators.md: Add new iterators for crypto vector.
>	* config/riscv/vector.md: Add the corresponding attr for crypto vector.
>	* config/riscv/vector-crypto.md: New file. The machine descriptions for crypto vector.
>---
> gcc/config/riscv/iterators.md        |   4 +-
> gcc/config/riscv/riscv.md            |  33 +-
> gcc/config/riscv/vector-crypto.md    | 500 +++++++++++++++++++++++++++
> gcc/config/riscv/vector-iterators.md |  41 +++
> gcc/config/riscv/vector.md           |  49 ++-
> 5 files changed, 607 insertions(+), 20 deletions(-)
> create mode 100755 gcc/config/riscv/vector-crypto.md
>
>diff --git a/gcc/config/riscv/iterators.md b/gcc/config/riscv/iterators.md
>index ecf033f2fa7..f332fba7031 100644
>--- a/gcc/config/riscv/iterators.md
>+++ b/gcc/config/riscv/iterators.md
>@@ -304,7 +304,9 @@
> 	     (umax "maxu")
> 	     (clz "clz")
> 	     (ctz "ctz")
>-	     (popcount "cpop")])
>+	     (popcount "cpop")
>+	     (rotate "rol")
>+	     (rotatert "ror")])
>;; -------------------------------------------------------------------
>;; Int Iterators.
>diff --git a/gcc/config/riscv/riscv.md b/gcc/config/riscv/riscv.md
>index 935eeb7fd8e..a887f3cd412 100644
>--- a/gcc/config/riscv/riscv.md
>+++ b/gcc/config/riscv/riscv.md
>@@ -428,6 +428,34 @@
>;; vcompress    vector compress instruction
>;; vmov         whole vector register move
>;; vector       unknown vector instruction
>+;; 17. Crypto Vector instructions
>+;; vandn        crypto vector bitwise and-not instructions
>+;; vbrev        crypto vector reverse bits in elements instructions
>+;; vbrev8       crypto vector reverse bits in bytes instructions
>+;; vrev8        crypto vector reverse bytes instructions
>+;; vclz         crypto vector count leading zeros instructions
>+;; vctz         crypto vector count trailing zeros instructions
>+;; vrol         crypto vector rotate left instructions
>+;; vror         crypto vector rotate right instructions
>+;; vwsll        crypto vector widening shift left logical instructions
>+;; vclmul       crypto vector carry-less multiply - return low half instructions
>+;; vclmulh      crypto vector carry-less multiply - return high half instructions
>+;; vghsh        crypto vector add-multiply over GHASH Galois-Field instructions
>+;; vgmul        crypto vector multiply over GHASH Galois-Field instructions
>+;; vaesef       crypto vector AES final-round encryption instructions
>+;; vaesem       crypto vector AES middle-round encryption instructions
>+;; vaesdf       crypto vector AES final-round decryption instructions
>+;; vaesdm       crypto vector AES middle-round decryption instructions
>+;; vaeskf1      crypto vector AES-128 Forward KeySchedule generation instructions
>+;; vaeskf2      crypto vector AES-256 Forward KeySchedule generation instructions
>+;; vaesz        crypto vector AES round zero encryption/decryption instructions
>+;; vsha2ms      crypto vector SHA-2 message schedule instructions
>+;; vsha2ch      crypto vector SHA-2 two rounds of compression instructions
>+;; vsha2cl      crypto vector SHA-2 two rounds of compression instructions
>+;; vsm4k        crypto vector SM4 KeyExpansion instructions
>+;; vsm4r        crypto vector SM4 Rounds instructions
>+;; vsm3me       crypto vector SM3 Message Expansion instructions
>+;; vsm3c        crypto vector SM3 Compression instructions
>(define_attr "type"
>  "unknown,branch,jump,jalr,ret,call,load,fpload,store,fpstore,
>   mtc,mfc,const,arith,logical,shift,slt,imul,idiv,move,fmove,fadd,fmul,
>@@ -447,7 +475,9 @@
>   vired,viwred,vfredu,vfredo,vfwredu,vfwredo,
>   vmalu,vmpop,vmffs,vmsfs,vmiota,vmidx,vimovvx,vimovxv,vfmovvf,vfmovfv,
>   vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,
>-  vgather,vcompress,vmov,vector"
>+  vgather,vcompress,vmov,vector,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vcpop,vrol,vror,vwsll,
>+  vclmul,vclmulh,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,vaeskf1,vaeskf2,vaesz,
>+  vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r,vsm3me,vsm3c"
>  (cond [(eq_attr "got" "load") (const_string "load")
>;; If a doubleword move uses these expensive instructions,
>@@ -3747,6 +3777,7 @@
>(include "thead.md")
>(include "generic-ooo.md")
>(include "vector.md")
>+(include "vector-crypto.md")
>(include "zicond.md")
>(include "zc.md")
>(include "corev.md")
>diff --git a/gcc/config/riscv/vector-crypto.md b/gcc/config/riscv/vector-crypto.md
>new file mode 100755
>index 00000000000..a40ecef4342
>--- /dev/null
>+++ b/gcc/config/riscv/vector-crypto.md
>@@ -0,0 +1,500 @@
>+(define_c_enum "unspec" [
>+    ;; Zvbb unspecs
>+    UNSPEC_VBREV
>+    UNSPEC_VBREV8
>+    UNSPEC_VREV8
>+    UNSPEC_VCLMUL
>+    UNSPEC_VCLMULH
>+    UNSPEC_VGHSH
>+    UNSPEC_VGMUL
>+    UNSPEC_VAESEF
>+    UNSPEC_VAESEFVV
>+    UNSPEC_VAESEFVS
>+    UNSPEC_VAESEM
>+    UNSPEC_VAESEMVV
>+    UNSPEC_VAESEMVS
>+    UNSPEC_VAESDF
>+    UNSPEC_VAESDFVV
>+    UNSPEC_VAESDFVS
>+    UNSPEC_VAESDM
>+    UNSPEC_VAESDMVV
>+    UNSPEC_VAESDMVS
>+    UNSPEC_VAESZ
>+    UNSPEC_VAESZVVNULL
>+    UNSPEC_VAESZVS
>+    UNSPEC_VAESKF1
>+    UNSPEC_VAESKF2
>+    UNSPEC_VSHA2MS
>+    UNSPEC_VSHA2CH
>+    UNSPEC_VSHA2CL
>+    UNSPEC_VSM4K
>+    UNSPEC_VSM4R
>+    UNSPEC_VSM4RVV
>+    UNSPEC_VSM4RVS
>+    UNSPEC_VSM3ME
>+    UNSPEC_VSM3C
>+])
>+
>+(define_int_attr rev [(UNSPEC_VBREV "brev") (UNSPEC_VBREV8 "brev8") (UNSPEC_VREV8 "rev8")])
>+
>+(define_int_attr h [(UNSPEC_VCLMUL "") (UNSPEC_VCLMULH "h")])
>+
>+(define_int_attr vv_ins_name [(UNSPEC_VGMUL    "gmul" ) (UNSPEC_VAESEFVV "aesef")
>+                              (UNSPEC_VAESEMVV "aesem") (UNSPEC_VAESDFVV "aesdf")
>+                              (UNSPEC_VAESDMVV "aesdm") (UNSPEC_VAESEFVS "aesef")
>+                              (UNSPEC_VAESEMVS "aesem") (UNSPEC_VAESDFVS "aesdf")
>+                              (UNSPEC_VAESDMVS "aesdm") (UNSPEC_VAESZVS  "aesz" )
>+                              (UNSPEC_VSM4RVV  "sm4r" ) (UNSPEC_VSM4RVS  "sm4r" )])
>+
>+(define_int_attr vv_ins1_name [(UNSPEC_VGHSH "ghsh") (UNSPEC_VSHA2MS "sha2ms")
>+                               (UNSPEC_VSHA2CH "sha2ch") (UNSPEC_VSHA2CL "sha2cl")])
>+
>+(define_int_attr vi_ins_name [(UNSPEC_VAESKF1 "aeskf1") (UNSPEC_VSM4K "sm4k")])
>+
>+(define_int_attr vi_ins1_name [(UNSPEC_VAESKF2 "aeskf2") (UNSPEC_VSM3C "sm3c")])
>+
>+(define_int_attr ins_type [(UNSPEC_VGMUL    "vv") (UNSPEC_VAESEFVV "vv")
>+                           (UNSPEC_VAESEMVV "vv") (UNSPEC_VAESDFVV "vv")
>+                           (UNSPEC_VAESDMVV "vv") (UNSPEC_VAESEFVS "vs")
>+                           (UNSPEC_VAESEMVS "vs") (UNSPEC_VAESDFVS "vs")
>+                           (UNSPEC_VAESDMVS "vs") (UNSPEC_VAESZVS  "vs")
>+                           (UNSPEC_VSM4RVV  "vv") (UNSPEC_VSM4RVS  "vs")])
>+
>+(define_int_iterator UNSPEC_VRBB8 [UNSPEC_VBREV UNSPEC_VBREV8 UNSPEC_VREV8])
>+
>+(define_int_iterator UNSPEC_CLMUL [UNSPEC_VCLMUL UNSPEC_VCLMULH])
>+
>+(define_int_iterator UNSPEC_CRYPTO_VV [UNSPEC_VGMUL UNSPEC_VAESEFVV UNSPEC_VAESEMVV
>+                                       UNSPEC_VAESDFVV UNSPEC_VAESDMVV UNSPEC_VAESEFVS
>+                                       UNSPEC_VAESEMVS UNSPEC_VAESDFVS UNSPEC_VAESDMVS
>+                                       UNSPEC_VAESZVS UNSPEC_VSM4RVV UNSPEC_VSM4RVS])
>+
>+(define_int_iterator UNSPEC_VGNHAB [UNSPEC_VGHSH UNSPEC_VSHA2MS UNSPEC_VSHA2CH UNSPEC_VSHA2CL])
>+
>+(define_int_iterator UNSPEC_CRYPTO_VI [UNSPEC_VAESKF1 UNSPEC_VSM4K])
>+
>+(define_int_iterator UNSPEC_CRYPTO_VI1 [UNSPEC_VAESKF2 UNSPEC_VSM3C])
>+
>+;; zvbb instructions patterns.
>+;; vandn.vv vandn.vx vrol.vv vrol.vx
>+;; vror.vv vror.vx vror.vi
>+;; vwsll.vv vwsll.vx vwsll.vi
>+
>+(define_insn "@pred_vandn<mode>"
>+  [(set (match_operand:VI 0 "register_operand"       "=vd,vd")
>+     (if_then_else:VI
>+       (unspec:<VM>
>+         [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
>+          (match_operand 5 "vector_length_operand"    "rK,rK")
>+          (match_operand 6 "const_int_operand"        "i, i")
>+          (match_operand 7 "const_int_operand"        "i, i")
>+          (match_operand 8 "const_int_operand"        "i, i")
>+          (reg:SI VL_REGNUM)
>+          (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
>+       (and:VI
>+         (match_operand:VI 3 "register_operand" "vr,vr")
>+         (not:VI (match_operand:VI 4 "register_operand" "vr,vr")))
>+       (match_operand:VI 2 "vector_merge_operand" "vu, 0")))]
>+  "TARGET_ZVBB || TARGET_ZVKB"
>+  "vandn.vv\t%0,%3,%4%p1"
>+  [(set_attr "type" "vandn")
>+   (set_attr "mode" "<MODE>")])
>+
>+(define_insn "@pred_vandn<mode>_scalar"
>+  [(set (match_operand:VI 0 "register_operand"       "=vd,vd")
>+     (if_then_else:VI
>+       (unspec:<VM>
>+         [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
>+          (match_operand 5 "vector_length_operand"    "rK,rK")
>+          (match_operand 6 "const_int_operand"        "i, i")
>+          (match_operand 7 "const_int_operand"        "i, i")
>+          (match_operand 8 "const_int_operand"        "i, i")
>+          (reg:SI VL_REGNUM)
>+          (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
>+       (and:VI
>+         (match_operand:VI 3 "register_operand" "vr,vr")
>+         (not:<VEL>
>+           (match_operand:<VEL> 4 "register_operand" "r,r")))
>+       (match_operand:VI 2 "vector_merge_operand" "vu, 0")))]
>+  "TARGET_ZVBB || TARGET_ZVKB"
>+  "vandn.vx\t%0,%3,%4%p1"
>+  [(set_attr "type" "vandn")
>+   (set_attr "mode" "<MODE>")])
>+
>+(define_insn "@pred_v<bitmanip_optab><mode>"
>+  [(set (match_operand:VI 0 "register_operand"       "=vd,vd")
>+     (if_then_else:VI
>+       (unspec:<VM>
>+         [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
>+          (match_operand 5 "vector_length_operand"    "rK,rK")
>+          (match_operand 6 "const_int_operand"        "i, i")
>+          (match_operand 7 "const_int_operand"        "i, i")
>+          (match_operand 8 "const_int_operand"        "i, i")
>+          (reg:SI VL_REGNUM)
>+          (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
>+       (bitmanip_rotate:VI
>+         (match_operand:VI 3 "register_operand" "vr,vr")
>+         (match_operand:VI 4 "register_operand" "vr,vr"))
>+       (match_operand:VI 2 "vector_merge_operand" "vu, 0")))]
>+  "TARGET_ZVBB || TARGET_ZVKB"
>+  "v<bitmanip_insn>.vv\t%0,%3,%4%p1"
>+  [(set_attr "type" "v<bitmanip_insn>")
>+   (set_attr "mode" "<MODE>")])
>+
>+(define_insn "@pred_v<bitmanip_optab><mode>_scalar"
>+  [(set (match_operand:VI 0 "register_operand"       "=vd, vd")
>+     (if_then_else:VI
>+       (unspec:<VM>
>+         [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1, vmWc1")
>+          (match_operand 5 "vector_length_operand"    "rK, rK")
>+          (match_operand 6 "const_int_operand"        "i, i")
>+          (match_operand 7 "const_int_operand"        "i, i")
>+          (match_operand 8 "const_int_operand"        "i, i")
>+          (reg:SI VL_REGNUM)
>+          (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
>+       (bitmanip_rotate:VI
>+         (match_operand:VI 3 "register_operand" "vr, vr")
>+         (match_operand 4 "pmode_register_operand" "r, r"))
>+       (match_operand:VI 2 "vector_merge_operand" "vu, 0")))]
>+  "TARGET_ZVBB || TARGET_ZVKB"
>+  "v<bitmanip_insn>.vx\t%0,%3,%4%p1"
>+  [(set_attr "type" "v<bitmanip_insn>")
>+   (set_attr "mode" "<MODE>")])
>+
>+(define_insn "*pred_vror<mode>_scalar"
>+  [(set (match_operand:VI 0 "register_operand"       "=vd, vd")
>+     (if_then_else:VI
>+       (unspec:<VM>
>+         [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1, vmWc1")
>+          (match_operand 5 "vector_length_operand"    "rK, rK")
>+          (match_operand 6 "const_int_operand"        "i, i")
>+          (match_operand 7 "const_int_operand"        "i, i")
>+          (match_operand 8 "const_int_operand"        "i, i")
>+          (reg:SI VL_REGNUM)
>+          (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
>+       (rotatert:VI
>+         (match_operand:VI 3 "register_operand" "vr, vr")
>+         (match_operand 4 "const_csr_operand" "K, K"))
>+       (match_operand:VI 2 "vector_merge_operand" "vu, 0")))]
>+  "TARGET_ZVBB || TARGET_ZVKB"
>+  "vror.vi\t%0,%3,%4%p1"
>+  [(set_attr "type" "vror")
>+   (set_attr "mode" "<MODE>")])
>+
>+(define_insn "@pred_vwsll<mode>"
>+  [(set (match_operand:VWEXTI 0 "register_operand" "=&vd")
>+     (if_then_else:VWEXTI
>+       (unspec:<VM>
>+         [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
>+          (match_operand 5 "vector_length_operand"    " rK")
>+          (match_operand 6 "const_int_operand"        "  i")
>+          (match_operand 7 "const_int_operand"        "  i")
>+          (match_operand 8 "const_int_operand"        "  i")
>+          (reg:SI VL_REGNUM)
>+          (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
>+       (ashift:VWEXTI
>+         (zero_extend:VWEXTI
>+           (match_operand:<V_DOUBLE_TRUNC> 3 "register_operand" "vr"))
>+         (match_operand:<V_DOUBLE_TRUNC> 4 "register_operand" "vr"))
>+       (match_operand:VWEXTI 2 "vector_merge_operand" "0vu")))]
>+  "TARGET_ZVBB"
>+  "vwsll.vv\t%0,%3,%4%p1"
>+  [(set_attr "type" "vwsll")
>+   (set_attr "mode" "<VWEXTI:MODE>")])
>+
>+(define_insn "@pred_vwsll<mode>_scalar"
>+  [(set (match_operand:VWEXTI 0 "register_operand" "=&vd")
>+     (if_then_else:VWEXTI
>+       (unspec:<VM>
>+         [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
>+          (match_operand 5 "vector_length_operand"    " rK")
>+          (match_operand 6 "const_int_operand"        "  i")
>+          (match_operand 7 "const_int_operand"        "  i")
>+          (match_operand 8 "const_int_operand"        "  i")
>+          (reg:SI VL_REGNUM)
>+          (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
>+       (ashift:VWEXTI
>+         (zero_extend:VWEXTI
>+           (match_operand:<V_DOUBLE_TRUNC> 3 "register_operand" "vr"))
>+         (match_operand:<VSUBEL> 4 "pmode_reg_or_uimm5_operand" "rK"))
>+       (match_operand:VWEXTI 2 "vector_merge_operand" "0vu")))]
>+  "TARGET_ZVBB"
>+  "vwsll.v%o4\t%0,%3,%4%p1"
>+  [(set_attr "type" "vwsll")
>+   (set_attr "mode" "<VWEXTI:MODE>")])
>+
>+;; vbrev.v vbrev8.v vrev8.v
>+
>+(define_insn "@pred_v<rev><mode>"
>+  [(set (match_operand:VI 0 "register_operand"       "=vd,vd")
>+     (if_then_else:VI
>+       (unspec:<VM>
>+         [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
>+          (match_operand 4 "vector_length_operand"    "rK,rK")
>+          (match_operand 5 "const_int_operand"        "i, i")
>+          (match_operand 6 "const_int_operand"        "i, i")
>+          (match_operand 7 "const_int_operand"        "i, i")
>+          (reg:SI VL_REGNUM)
>+          (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
>+       (unspec:VI
>+         [(match_operand:VI 3 "register_operand" "vr,vr")] UNSPEC_VRBB8)
>+       (match_operand:VI 2 "vector_merge_operand" "vu, 0")))]
>+  "TARGET_ZVBB || TARGET_ZVKB"
>+  "v<rev>.v\t%0,%3%p1"
>+  [(set_attr "type" "v<rev>")
>+   (set_attr "mode" "<MODE>")])
>+
>+;; vclz.v vctz.v
>+
>+(define_insn "@pred_v<bitmanip_optab><mode>"
>+  [(set (match_operand:VI 0 "register_operand" "=vd")
>+     (clz_ctz_pcnt:VI
>+       (parallel
>+         [(match_operand:VI 2 "register_operand" "vr")
>+          (unspec:<VM>
>+            [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
>+             (match_operand 3 "vector_length_operand"    " rK")
>+             (match_operand 4 "const_int_operand"        "  i")
>+             (reg:SI VL_REGNUM)
>+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)])))]
>+  "TARGET_ZVBB"
>+  "v<bitmanip_insn>.v\t%0,%2%p1"
>+  [(set_attr "type" "v<bitmanip_insn>")
>+   (set_attr "mode" "<MODE>")])
>+
>+;; zvbc instructions patterns.
>+;; vclmul.vv vclmul.vx
>+;; vclmulh.vv vclmulh.vx
>+
>+(define_insn "@pred_vclmul<h><mode>"
>+  [(set (match_operand:VDI 0 "register_operand"      "=vd,vd")
>+     (if_then_else:VDI
>+       (unspec:<VM>
>+         [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
>+          (match_operand 5 "vector_length_operand"    "rK,rK")
>+          (match_operand 6 "const_int_operand"        "i, i")
>+          (match_operand 7 "const_int_operand"        "i, i")
>+          (match_operand 8 "const_int_operand"        "i, i")
>+          (reg:SI VL_REGNUM)
>+          (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
>+       (unspec:VDI
>+         [(match_operand:VDI 3 "register_operand" "vr,vr")
>+          (match_operand:VDI 4 "register_operand" "vr,vr")] UNSPEC_CLMUL)
>+       (match_operand:VDI 2 "vector_merge_operand" "vu, 0")))]
>+  "TARGET_ZVBC && TARGET_64BIT"
>+  "vclmul<h>.vv\t%0,%3,%4%p1"
>+  [(set_attr "type" "vclmul<h>")
>+   (set_attr "mode" "<VDI:MODE>")])
>+
>+(define_insn "@pred_vclmul<h><mode>_scalar"
>+  [(set (match_operand:VDI 0 "register_operand"      "=vd,vd")
>+     (if_then_else:VDI
>+       (unspec:<VM>
>+         [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
>+          (match_operand 5 "vector_length_operand"    "rK,rK")
>+          (match_operand 6 "const_int_operand"        "i, i")
>+          (match_operand 7 "const_int_operand"        "i, i")
>+          (match_operand 8 "const_int_operand"        "i, i")
>+          (reg:SI VL_REGNUM)
>+          (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
>+       (unspec:VDI
>+         [(match_operand:VDI 3 "register_operand" "vr,vr")
>+          (match_operand:<VDI:VEL> 4 "register_operand" "r,r")] UNSPEC_CLMUL)
>+       (match_operand:VDI 2 "vector_merge_operand" "vu, 0")))]
>+  "TARGET_ZVBC && TARGET_64BIT"
>+  "vclmul<h>.vx\t%0,%3,%4%p1"
>+  [(set_attr "type" "vclmul<h>")
>+   (set_attr "mode" "<VDI:MODE>")])
>+
>+;; zvknh[ab] and zvkg instructions patterns.
>+;; vsha2ms.vv vsha2ch.vv vsha2cl.vv vghsh.vv
>+
>+(define_insn "@pred_v<vv_ins1_name><mode>"
>+  [(set (match_operand:VQEXTI 0 "register_operand" "=vd")
>+     (if_then_else:VQEXTI
>+       (unspec:<VQEXTI:VM>
>+         [(match_operand 4 "vector_length_operand" "rK")
>+          (match_operand 5 "const_int_operand"     " i")
>+          (match_operand 6 "const_int_operand"     " i")
>+          (reg:SI VL_REGNUM)
>+          (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
>+       (unspec:VQEXTI
>+         [(match_operand:VQEXTI 1 "register_operand" " 0")
>+          (match_operand:VQEXTI 2 "register_operand" "vr")
>+          (match_operand:VQEXTI 3 "register_operand" "vr")] UNSPEC_VGNHAB)
>+       (match_dup 1)))]
>+  "TARGET_ZVKNHA || TARGET_ZVKNHB || TARGET_ZVKG"
>+  "v<vv_ins1_name>.vv\t%0,%2,%3"
>+  [(set_attr "type" "v<vv_ins1_name>")
>+   (set_attr "mode" "<VQEXTI:MODE>")])
>+
>+;; zvkned instructions patterns.
>+;; vgmul.vv vaesz.vs
>+;; vaesef.[vv,vs] vaesem.[vv,vs] vaesdf.[vv,vs] vaesdm.[vv,vs]
>+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type><mode>"
>+  [(set (match_operand:VSI 0 "register_operand" "=vd")
>+     (if_then_else:VSI
>+       (unspec:<VM>
>+         [(match_operand 3 "vector_length_operand" "rK")
>+          (match_operand 4 "const_int_operand"     " i")
>+          (match_operand 5 "const_int_operand"     " i")
>+          (reg:SI VL_REGNUM)
>+          (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
>+       (unspec:VSI
>+         [(match_operand:VSI 1 "register_operand" " 0")
>+          (match_operand:VSI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
>+       (match_dup 1)))]
>+  "TARGET_ZVKG || TARGET_ZVKNED"
>+  "v<vv_ins_name>.<ins_type>\t%0,%2"
>+  [(set_attr "type" "v<vv_ins_name>")
>+   (set_attr "mode" "<VSI:MODE>")])
>+
>+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x1<mode>_scalar"
>+  [(set (match_operand:VSI 0 "register_operand" "=vd")
>+     (if_then_else:VSI
>+       (unspec:<VM>
>+         [(match_operand 3 "vector_length_operand" "rK")
>+          (match_operand 4 "const_int_operand"     " i")
>+          (match_operand 5 "const_int_operand"     " i")
>+          (reg:SI VL_REGNUM)
>+          (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
>+       (unspec:VSI
>+         [(match_operand:VSI 1 "register_operand" " 0")
>+          (match_operand:VSI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
>+       (match_dup 1)))]
>+  "TARGET_ZVKNED || TARGET_ZVKSED"
>+  "v<vv_ins_name>.<ins_type>\t%0,%2"
>+  [(set_attr "type" "v<vv_ins_name>")
>+   (set_attr "mode" "<VSI:MODE>")])
>+
>+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x2<mode>_scalar"
>+  [(set (match_operand:<VSIX2> 0 "register_operand" "=vd")
>+     (if_then_else:<VSIX2>
>+       (unspec:<VM>
>+         [(match_operand 3 "vector_length_operand" "rK")
>+          (match_operand 4 "const_int_operand"     " i")
>+          (match_operand 5 "const_int_operand"     " i")
>+          (reg:SI VL_REGNUM)
>+          (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
>+       (unspec:<VSIX2>
>+         [(match_operand:<VSIX2> 1 "register_operand" " 0")
>+          (match_operand:VLMULX2_SI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
>+       (match_dup 1)))]
>+  "TARGET_ZVKNED || TARGET_ZVKSED"
>+  "v<vv_ins_name>.<ins_type>\t%0,%2"
>+  [(set_attr "type" "v<vv_ins_name>")
>+   (set_attr "mode" "<VLMULX2_SI:MODE>")])
>+
>+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x4<mode>_scalar"
>+  [(set (match_operand:<VSIX4> 0 "register_operand" "=vd")
>+     (if_then_else:<VSIX4>
>+       (unspec:<VM>
>+         [(match_operand 3 "vector_length_operand" "rK")
>+          (match_operand 4 "const_int_operand"     " i")
>+          (match_operand 5 "const_int_operand"     " i")
>+          (reg:SI VL_REGNUM)
>+          (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
>+       (unspec:<VSIX4>
>+         [(match_operand:<VSIX4> 1 "register_operand" " 0")
>+          (match_operand:VLMULX4_SI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
>+       (match_dup 1)))]
>+  "TARGET_ZVKNED || TARGET_ZVKSED"
>+  "v<vv_ins_name>.<ins_type>\t%0,%2"
>+  [(set_attr "type" "v<vv_ins_name>")
>+   (set_attr "mode" "<VLMULX4_SI:MODE>")])
>+
>+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x8<mode>_scalar"
>+  [(set (match_operand:<VSIX8> 0 "register_operand" "=vd")
>+     (if_then_else:<VSIX8>
>+       (unspec:<VM>
>+         [(match_operand 3 "vector_length_operand" "rK")
>+          (match_operand 4 "const_int_operand"     " i")
>+          (match_operand 5 "const_int_operand"     " i")
>+          (reg:SI VL_REGNUM)
>+          (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
>+       (unspec:<VSIX8>
>+         [(match_operand:<VSIX8> 1 "register_operand" " 0")
>+          (match_operand:VLMULX8_SI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
>+       (match_dup 1)))]
>+  "TARGET_ZVKNED || TARGET_ZVKSED"
>+  "v<vv_ins_name>.<ins_type>\t%0,%2"
>+  [(set_attr "type" "v<vv_ins_name>")
>+   (set_attr "mode" "<VLMULX8_SI:MODE>")])
>+
>+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x16<mode>_scalar"
>+  [(set (match_operand:<VSIX16> 0 "register_operand" "=vd")
>+     (if_then_else:<VSIX16>
>+       (unspec:<VM>
>+         [(match_operand 3 "vector_length_operand" "rK")
>+          (match_operand 4 "const_int_operand"     " i")
>+          (match_operand 5 "const_int_operand"     " i")
>+          (reg:SI VL_REGNUM)
>+          (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
>+       (unspec:<VSIX16>
>+         [(match_operand:<VSIX16> 1 "register_operand" " 0")
>+          (match_operand:VLMULX16_SI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
>+       (match_dup 1)))]
>+  "TARGET_ZVKNED || TARGET_ZVKSED"
>+  "v<vv_ins_name>.<ins_type>\t%0,%2"
>+  [(set_attr "type" "v<vv_ins_name>")
>+   (set_attr "mode" "<VLMULX16_SI:MODE>")])
>+
>+;; vaeskf1.vi vsm4k.vi
>+(define_insn "@pred_crypto_vi<vi_ins_name><mode>_scalar"
>+  [(set (match_operand:VSI 0 "register_operand"    "=vd, vd")
>+     (if_then_else:VSI
>+       (unspec:<VM>
>+         [(match_operand 4 "vector_length_operand" "rK, rK")
>+          (match_operand 5 "const_int_operand"     " i,  i")
>+          (match_operand 6 "const_int_operand"     " i,  i")
>+          (reg:SI VL_REGNUM)
>+          (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
>+       (unspec:VSI
>+         [(match_operand:VSI 2 "register_operand"   "vr, vr")
>+          (match_operand:<VEL> 3 "const_int_operand" " i,  i")] UNSPEC_CRYPTO_VI)
>+       (match_operand:VSI 1 "vector_merge_operand" "vu, 0")))]
>+  "TARGET_ZVKNED || TARGET_ZVKSED"
>+  "v<vi_ins_name>.vi\t%0,%2,%3"
>+  [(set_attr "type" "v<vi_ins_name>")
>+   (set_attr "mode" "<MODE>")])
>+
>+;; vaeskf2.vi vsm3c.vi
>+(define_insn "@pred_vi<vi_ins1_name><mode>_nomaskedoff_scalar"
>+  [(set (match_operand:VSI 0 "register_operand" "=vd")
>+     (if_then_else:VSI
>+       (unspec:<VSI:VM>
>+         [(match_operand 4 "vector_length_operand" "rK")
>+          (match_operand 5 "const_int_operand"     " i")
>+          (match_operand 6 "const_int_operand"     " i")
>+          (reg:SI VL_REGNUM)
>+          (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
>+       (unspec:VSI
>+         [(match_operand:VSI 1 "register_operand"   "0")
>+          (match_operand:VSI 2 "register_operand"  "vr")
>+          (match_operand:<VEL> 3 "const_int_operand" " i")] UNSPEC_CRYPTO_VI1)
>+       (match_dup 1)))]
>+  "TARGET_ZVKNED || TARGET_ZVKSH"
>+  "v<vi_ins1_name>.vi\t%0,%2,%3"
>+  [(set_attr "type" "v<vi_ins1_name>")
>+   (set_attr "mode" "<MODE>")])
>+
>+;; zvksh instructions patterns.
>+;; vsm3me.vv >+ >+(define_insn "@pred_vsm3me<mode>" >+ [(set (match_operand:VSI 0 "register_operand" "=vd, vd") >+ (if_then_else:VSI >+ (unspec:<VM> >+ [(match_operand 4 "vector_length_operand" "rK, rK") >+ (match_operand 5 "const_int_operand" " i, i") >+ (match_operand 6 "const_int_operand" " i, i") >+ (reg:SI VL_REGNUM) >+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE) >+ (unspec:VSI >+ [(match_operand:VSI 2 "register_operand" "vr, vr") >+ (match_operand:VSI 3 "register_operand" "vr, vr")] UNSPEC_VSM3ME) >+ (match_operand:VSI 1 "vector_merge_operand" "vu, 0")))] >+ "TARGET_ZVKSH" >+ "vsm3me.vv\t%0,%2,%3" >+ [(set_attr "type" "vsm3me") >+ (set_attr "mode" "<VSI:MODE>")]) >\ No newline at end of file >diff --git a/gcc/config/riscv/vector-iterators.md b/gcc/config/riscv/vector-iterators.md >index 56080ed1f5f..1b16b476035 100644 >--- a/gcc/config/riscv/vector-iterators.md >+++ b/gcc/config/riscv/vector-iterators.md >@@ -3916,3 +3916,44 @@ > (V1024BI "riscv_vector::vls_mode_valid_p (V1024BImode) && TARGET_MIN_VLEN >= 1024") > (V2048BI "riscv_vector::vls_mode_valid_p (V2048BImode) && TARGET_MIN_VLEN >= 2048") > (V4096BI "riscv_vector::vls_mode_valid_p (V4096BImode) && TARGET_MIN_VLEN >= 4096")]) >+ >+(define_mode_iterator VSI [ >+ RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32") >+]) >+ >+(define_mode_iterator VLMULX2_SI [ >+ RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32") >+]) >+ >+(define_mode_iterator VLMULX4_SI [ >+ RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32") >+]) >+ >+(define_mode_iterator VLMULX8_SI [ >+ RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32") >+]) >+ >+(define_mode_iterator VLMULX16_SI [ >+ (RVVMF2SI "TARGET_MIN_VLEN > 32") >+]) >+ >+(define_mode_attr VSIX2 [ >+ (RVVM8SI "RVVM8SI") (RVVM4SI "RVVM8SI") (RVVM2SI "RVVM4SI") (RVVM1SI "RVVM2SI") (RVVMF2SI "RVVM1SI") >+]) >+ >+(define_mode_attr VSIX4 [ >+ (RVVM2SI "RVVM8SI") (RVVM1SI "RVVM4SI") (RVVMF2SI "RVVM2SI") >+]) >+ >+(define_mode_attr VSIX8 [ >+ (RVVM1SI "RVVM8SI") (RVVMF2SI 
"RVVM4SI") >+]) >+ >+(define_mode_attr VSIX16 [ >+ (RVVMF2SI "RVVM8SI") >+]) >+ >+(define_mode_iterator VDI [ >+ (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64") >+ (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64") >+]) >\ No newline at end of file >diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md >index ba9c9e5a9b6..36cb7510ec6 100644 >--- a/gcc/config/riscv/vector.md >+++ b/gcc/config/riscv/vector.md >@@ -52,7 +52,9 @@ > vmalu,vmpop,vmffs,vmsfs,vmiota,vmidx,vimovvx,vimovxv,vfmovvf,vfmovfv,\ > vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\ > vgather,vcompress,vlsegde,vssegte,vlsegds,vssegts,vlsegdux,vlsegdox,\ >- vssegtux,vssegtox,vlsegdff") >+ vssegtux,vssegtox,vlsegdff,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,\ >+ vror,vwsll,vclmul,vclmulh,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,\ >+ vaeskf1,vaeskf2,vaesz,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r,vsm3me,vsm3c") >(const_string "true")] >(const_string "false"))) >@@ -74,7 +76,9 @@ > vmalu,vmpop,vmffs,vmsfs,vmiota,vmidx,vimovxv,vfmovfv,\ > vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\ > vgather,vcompress,vlsegde,vssegte,vlsegds,vssegts,vlsegdux,vlsegdox,\ >- vssegtux,vssegtox,vlsegdff") >+ vssegtux,vssegtox,vlsegdff,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,\ >+ vror,vwsll,vclmul,vclmulh,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,\ >+ vaeskf1,vaeskf2,vaesz,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r,vsm3me,vsm3c") >(const_string "true")] >(const_string "false"))) >@@ -698,10 +702,12 @@ >vfwcvtftoi,vfwcvtftof,vfncvtitof,vfncvtftoi,vfncvtftof,vfclass,\ >vired,viwred,vfredu,vfredo,vfwredu,vfwredo,vimovxv,vfmovfv,\ >vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\ >- vgather,vldff,viwmuladd,vfwmuladd,vlsegde,vlsegds,vlsegdux,vlsegdox,vlsegdff") >+ vgather,vldff,viwmuladd,vfwmuladd,vlsegde,vlsegds,vlsegdux,vlsegdox,vlsegdff,\ >+ vandn,vbrev,vbrev8,vrev8,vrol,vror,vwsll,vclmul,vclmulh") > (const_int 2) >- (eq_attr 
"type" "vimerge,vfmerge,vcompress") >+ (eq_attr "type" "vimerge,vfmerge,vcompress,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,\ >+ vaeskf1,vaeskf2,vaesz,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r,vsm3me,vsm3c") > (const_int 1) > (eq_attr "type" "vimuladd,vfmuladd") >@@ -740,7 +746,8 @@ > vstox,vext,vmsfs,vmiota,vfsqrt,vfrecp,vfcvtitof,vldff,\ > vfcvtftoi,vfwcvtitof,vfwcvtftoi,vfwcvtftof,vfncvtitof,\ > vfncvtftoi,vfncvtftof,vfclass,vimovxv,vfmovfv,vcompress,\ >- vlsegde,vssegts,vssegtux,vssegtox,vlsegdff") >+ vlsegde,vssegts,vssegtux,vssegtox,vlsegdff,vbrev,vbrev8,vrev8,\ >+ vghsh,vaeskf1,vaeskf2,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm3me,vsm3c") > (const_int 4) >;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast. >@@ -755,13 +762,15 @@ > vsshift,vnclip,vfalu,vfmul,vfminmax,vfdiv,vfwalu,vfwmul,\ > vfsgnj,vfmerge,vired,viwred,vfredu,vfredo,vfwredu,vfwredo,\ > vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\ >- vgather,viwmuladd,vfwmuladd,vlsegds,vlsegdux,vlsegdox") >+ vgather,viwmuladd,vfwmuladd,vlsegds,vlsegdux,vlsegdox,vandn,vrol,\ >+ vror,vwsll,vclmul,vclmulh") > (const_int 5) >(eq_attr "type" "vicmp,vimuladd,vfcmp,vfmuladd") > (const_int 6) >- (eq_attr "type" "vmpop,vmffs,vmidx,vssegte") >+ (eq_attr "type" "vmpop,vmffs,vmidx,vssegte,vclz,vctz,vgmul,vaesef,vaesem,vaesdf,vaesdm,\ >+ vaesz,vsm4r") > (const_int 3)] > (const_int INVALID_ATTRIBUTE))) >@@ -770,7 +779,8 @@ > (cond [(eq_attr "type" "vlde,vimov,vfmov,vext,vmiota,vfsqrt,vfrecp,\ > vfcvtitof,vfcvtftoi,vfwcvtitof,vfwcvtftoi,vfwcvtftof,\ > vfncvtitof,vfncvtftoi,vfncvtftof,vfclass,vimovxv,vfmovfv,\ >- vcompress,vldff,vlsegde,vlsegdff") >+ vcompress,vldff,vlsegde,vlsegdff,vbrev,vbrev8,vrev8,vghsh,\ >+ vaeskf1,vaeskf2,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm3me,vsm3c") > (symbol_ref "riscv_vector::get_ta(operands[5])") >;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast. 
>@@ -786,13 +796,13 @@ > vfwalu,vfwmul,vfsgnj,vfmerge,vired,viwred,vfredu,\ > vfredo,vfwredu,vfwredo,vslideup,vslidedown,vislide1up,\ > vislide1down,vfslide1up,vfslide1down,vgather,viwmuladd,vfwmuladd,\ >- vlsegds,vlsegdux,vlsegdox") >+ vlsegds,vlsegdux,vlsegdox,vandn,vrol,vror,vwsll,vclmul,vclmulh") > (symbol_ref "riscv_vector::get_ta(operands[6])") >(eq_attr "type" "vimuladd,vfmuladd") > (symbol_ref "riscv_vector::get_ta(operands[7])") >- (eq_attr "type" "vmidx") >+ (eq_attr "type" "vmidx,vgmul,vaesef,vaesem,vaesdf,vaesdm,vaesz,vsm4r") > (symbol_ref "riscv_vector::get_ta(operands[4])")] >(const_int INVALID_ATTRIBUTE))) >@@ -800,7 +810,7 @@ >(define_attr "ma" "" > (cond [(eq_attr "type" "vlde,vext,vmiota,vfsqrt,vfrecp,vfcvtitof,vfcvtftoi,\ > vfwcvtitof,vfwcvtftoi,vfwcvtftof,vfncvtitof,vfncvtftoi,\ >- vfncvtftof,vfclass,vldff,vlsegde,vlsegdff") >+ vfncvtftof,vfclass,vldff,vlsegde,vlsegdff,vbrev,vbrev8,vrev8") > (symbol_ref "riscv_vector::get_ma(operands[6])") >;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast. >@@ -815,7 +825,8 @@ > vnclip,vicmp,vfalu,vfmul,vfminmax,vfdiv,\ > vfwalu,vfwmul,vfsgnj,vfcmp,vslideup,vslidedown,\ > vislide1up,vislide1down,vfslide1up,vfslide1down,vgather,\ >- viwmuladd,vfwmuladd,vlsegds,vlsegdux,vlsegdox") >+ viwmuladd,vfwmuladd,vlsegds,vlsegdux,vlsegdox,vandn,vrol,\ >+ vror,vwsll,vclmul,vclmulh") > (symbol_ref "riscv_vector::get_ma(operands[7])") >(eq_attr "type" "vimuladd,vfmuladd") >@@ -831,9 +842,10 @@ > vfsqrt,vfrecp,vfmerge,vfcvtitof,vfcvtftoi,vfwcvtitof,\ > vfwcvtftoi,vfwcvtftof,vfncvtitof,vfncvtftoi,vfncvtftof,\ > vfclass,vired,viwred,vfredu,vfredo,vfwredu,vfwredo,\ >- vimovxv,vfmovfv,vlsegde,vlsegdff") >+ vimovxv,vfmovfv,vlsegde,vlsegdff,vbrev,vbrev8,vrev8") > (const_int 7) >- (eq_attr "type" "vldm,vstm,vmalu,vmalu") >+ (eq_attr "type" "vldm,vstm,vmalu,vmalu,vgmul,vaesef,vaesem,vaesdf,vaesdm,vaesz,\ >+ vsm4r") > (const_int 5) >;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast. 
>@@ -848,18 +860,19 @@ > vnclip,vicmp,vfalu,vfmul,vfminmax,vfdiv,vfwalu,vfwmul,\ > vfsgnj,vfcmp,vslideup,vslidedown,vislide1up,\ > vislide1down,vfslide1up,vfslide1down,vgather,viwmuladd,vfwmuladd,\ >- vlsegds,vlsegdux,vlsegdox") >+ vlsegds,vlsegdux,vlsegdox,vandn,vrol,vror,vwsll") > (const_int 8) >- (eq_attr "type" "vstux,vstox,vssegts,vssegtux,vssegtox") >+ (eq_attr "type" "vstux,vstox,vssegts,vssegtux,vssegtox,vclmul,vclmulh") > (const_int 5) >(eq_attr "type" "vimuladd,vfmuladd") > (const_int 9) >- (eq_attr "type" "vmsfs,vmidx,vcompress") >+ (eq_attr "type" "vmsfs,vmidx,vcompress,vghsh,vaeskf1,vaeskf2,vsha2ms,vsha2ch,vsha2cl,\ >+ vsm4k,vsm3me,vsm3c") > (const_int 6) >- (eq_attr "type" "vmpop,vmffs,vssegte") >+ (eq_attr "type" "vmpop,vmffs,vssegte,vclz,vctz") > (const_int 4)] >(const_int INVALID_ATTRIBUTE))) >-- >2.17.1 > >
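[Editor's note] As a cross-check against the Zvbb patterns in the patch below, the element-wise semantics of `vandn` and `vror` can be modelled in a few lines of Python. This is an illustrative sketch only; the function names and the fixed SEW=32 are ours, not part of the patch:

```python
SEW = 32                 # element width chosen for this sketch; Zvbb allows other SEWs
MASK = (1 << SEW) - 1

def vandn(vs2, vs1):
    # vandn.vv: vd[i] = vs2[i] & ~vs1[i]  (AND-NOT; note which operand is inverted)
    return [(a & ~b) & MASK for a, b in zip(vs2, vs1)]

def vror(vs2, shamt):
    # vror.vx/.vi: rotate each element right; the shift amount is taken mod SEW
    s = shamt % SEW
    return [((x >> s) | (x << (SEW - s))) & MASK for x in vs2]
```

`vrol` is the mirror image (rotate left), and `vwsll` differs in producing its result at 2*SEW, which is why its pattern zero-extends operand 3 before the `ashift`.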
diff --git a/gcc/config/riscv/iterators.md b/gcc/config/riscv/iterators.md
index ecf033f2fa7..f332fba7031 100644
--- a/gcc/config/riscv/iterators.md
+++ b/gcc/config/riscv/iterators.md
@@ -304,7 +304,9 @@
  (umax "maxu")
  (clz "clz")
  (ctz "ctz")
- (popcount "cpop")])
+ (popcount "cpop")
+ (rotate "rol")
+ (rotatert "ror")])

 ;; -------------------------------------------------------------------
 ;; Int Iterators.
diff --git a/gcc/config/riscv/riscv.md b/gcc/config/riscv/riscv.md
index 935eeb7fd8e..a887f3cd412 100644
--- a/gcc/config/riscv/riscv.md
+++ b/gcc/config/riscv/riscv.md
@@ -428,6 +428,34 @@
 ;; vcompress    vector compress instruction
 ;; vmov         whole vector register move
 ;; vector       unknown vector instruction
+;; 17. Crypto Vector instructions
+;; vandn        crypto vector bitwise and-not instructions
+;; vbrev        crypto vector reverse bits in elements instructions
+;; vbrev8       crypto vector reverse bits in bytes instructions
+;; vrev8        crypto vector reverse bytes instructions
+;; vclz         crypto vector count leading zeros instructions
+;; vctz         crypto vector count trailing zeros instructions
+;; vrol         crypto vector rotate left instructions
+;; vror         crypto vector rotate right instructions
+;; vwsll        crypto vector widening shift left logical instructions
+;; vclmul       crypto vector carry-less multiply - return low half instructions
+;; vclmulh      crypto vector carry-less multiply - return high half instructions
+;; vghsh        crypto vector add-multiply over GHASH Galois-Field instructions
+;; vgmul        crypto vector multiply over GHASH Galois-Field instructions
+;; vaesef       crypto vector AES final-round encryption instructions
+;; vaesem       crypto vector AES middle-round encryption instructions
+;; vaesdf       crypto vector AES final-round decryption instructions
+;; vaesdm       crypto vector AES middle-round decryption instructions
+;; vaeskf1      crypto vector AES-128 Forward KeySchedule generation instructions
+;; vaeskf2      crypto vector AES-256 Forward KeySchedule generation instructions
+;; vaesz        crypto vector AES round zero encryption/decryption instructions
+;; vsha2ms      crypto vector SHA-2 message schedule instructions
+;; vsha2ch      crypto vector SHA-2 two rounds of compression instructions
+;; vsha2cl      crypto vector SHA-2 two rounds of compression instructions
+;; vsm4k        crypto vector SM4 KeyExpansion instructions
+;; vsm4r        crypto vector SM4 Rounds instructions
+;; vsm3me       crypto vector SM3 Message Expansion instructions
+;; vsm3c        crypto vector SM3 Compression instructions

 (define_attr "type"
   "unknown,branch,jump,jalr,ret,call,load,fpload,store,fpstore,
    mtc,mfc,const,arith,logical,shift,slt,imul,idiv,move,fmove,fadd,fmul,
@@ -447,7 +475,9 @@
    vired,viwred,vfredu,vfredo,vfwredu,vfwredo,
    vmalu,vmpop,vmffs,vmsfs,vmiota,vmidx,vimovvx,vimovxv,vfmovvf,vfmovfv,
    vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,
-   vgather,vcompress,vmov,vector"
+   vgather,vcompress,vmov,vector,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vcpop,vrol,vror,vwsll,
+   vclmul,vclmulh,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,vaeskf1,vaeskf2,vaesz,
+   vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r,vsm3me,vsm3c"
   (cond [(eq_attr "got" "load") (const_string "load")

 ;; If a doubleword move uses these expensive instructions,
@@ -3747,6 +3777,7 @@
 (include "thead.md")
 (include "generic-ooo.md")
 (include "vector.md")
+(include "vector-crypto.md")
 (include "zicond.md")
 (include "zc.md")
 (include "corev.md")
diff --git a/gcc/config/riscv/vector-crypto.md b/gcc/config/riscv/vector-crypto.md
new file mode 100755
index 00000000000..a40ecef4342
--- /dev/null
+++ b/gcc/config/riscv/vector-crypto.md
@@ -0,0 +1,500 @@
+(define_c_enum "unspec" [
+    ;; Zvbb unspecs
+    UNSPEC_VBREV
+    UNSPEC_VBREV8
+    UNSPEC_VREV8
+    UNSPEC_VCLMUL
+    UNSPEC_VCLMULH
+    UNSPEC_VGHSH
+    UNSPEC_VGMUL
+    UNSPEC_VAESEF
+    UNSPEC_VAESEFVV
+    UNSPEC_VAESEFVS
+    UNSPEC_VAESEM
+    UNSPEC_VAESEMVV
+    UNSPEC_VAESEMVS
+    UNSPEC_VAESDF
+    UNSPEC_VAESDFVV
+    UNSPEC_VAESDFVS
+    UNSPEC_VAESDM
+    UNSPEC_VAESDMVV
+    UNSPEC_VAESDMVS
+    UNSPEC_VAESZ
+    UNSPEC_VAESZVVNULL
+    UNSPEC_VAESZVS
+    UNSPEC_VAESKF1
+    UNSPEC_VAESKF2
+    UNSPEC_VSHA2MS
+    UNSPEC_VSHA2CH
+    UNSPEC_VSHA2CL
+    UNSPEC_VSM4K
+    UNSPEC_VSM4R
+    UNSPEC_VSM4RVV
+    UNSPEC_VSM4RVS
+    UNSPEC_VSM3ME
+    UNSPEC_VSM3C
+])
+
+(define_int_attr rev [(UNSPEC_VBREV "brev") (UNSPEC_VBREV8 "brev8") (UNSPEC_VREV8 "rev8")])
+
+(define_int_attr h [(UNSPEC_VCLMUL "") (UNSPEC_VCLMULH "h")])
+
+(define_int_attr vv_ins_name [(UNSPEC_VGMUL    "gmul" ) (UNSPEC_VAESEFVV "aesef")
+                              (UNSPEC_VAESEMVV "aesem") (UNSPEC_VAESDFVV "aesdf")
+                              (UNSPEC_VAESDMVV "aesdm") (UNSPEC_VAESEFVS "aesef")
+                              (UNSPEC_VAESEMVS "aesem") (UNSPEC_VAESDFVS "aesdf")
+                              (UNSPEC_VAESDMVS "aesdm") (UNSPEC_VAESZVS  "aesz" )
+                              (UNSPEC_VSM4RVV  "sm4r" ) (UNSPEC_VSM4RVS  "sm4r" )])
+
+(define_int_attr vv_ins1_name [(UNSPEC_VGHSH "ghsh") (UNSPEC_VSHA2MS "sha2ms")
+                               (UNSPEC_VSHA2CH "sha2ch") (UNSPEC_VSHA2CL "sha2cl")])
+
+(define_int_attr vi_ins_name [(UNSPEC_VAESKF1 "aeskf1") (UNSPEC_VSM4K "sm4k")])
+
+(define_int_attr vi_ins1_name [(UNSPEC_VAESKF2 "aeskf2") (UNSPEC_VSM3C "sm3c")])
+
+(define_int_attr ins_type [(UNSPEC_VGMUL    "vv") (UNSPEC_VAESEFVV "vv")
+                           (UNSPEC_VAESEMVV "vv") (UNSPEC_VAESDFVV "vv")
+                           (UNSPEC_VAESDMVV "vv") (UNSPEC_VAESEFVS "vs")
+                           (UNSPEC_VAESEMVS "vs") (UNSPEC_VAESDFVS "vs")
+                           (UNSPEC_VAESDMVS "vs") (UNSPEC_VAESZVS  "vs")
+                           (UNSPEC_VSM4RVV  "vv") (UNSPEC_VSM4RVS  "vs")])
+
+(define_int_iterator UNSPEC_VRBB8 [UNSPEC_VBREV UNSPEC_VBREV8 UNSPEC_VREV8])
+
+(define_int_iterator UNSPEC_CLMUL [UNSPEC_VCLMUL UNSPEC_VCLMULH])
+
+(define_int_iterator UNSPEC_CRYPTO_VV [UNSPEC_VGMUL UNSPEC_VAESEFVV UNSPEC_VAESEMVV
+                                       UNSPEC_VAESDFVV UNSPEC_VAESDMVV UNSPEC_VAESEFVS
+                                       UNSPEC_VAESEMVS UNSPEC_VAESDFVS UNSPEC_VAESDMVS
+                                       UNSPEC_VAESZVS UNSPEC_VSM4RVV UNSPEC_VSM4RVS])
+
+(define_int_iterator UNSPEC_VGNHAB [UNSPEC_VGHSH UNSPEC_VSHA2MS UNSPEC_VSHA2CH UNSPEC_VSHA2CL])
+
+(define_int_iterator UNSPEC_CRYPTO_VI [UNSPEC_VAESKF1 UNSPEC_VSM4K])
+
+(define_int_iterator UNSPEC_CRYPTO_VI1 [UNSPEC_VAESKF2 UNSPEC_VSM3C])
+
+;; zvbb instructions patterns.
+;; vandn.vv vandn.vx vrol.vv vrol.vx
+;; vror.vv vror.vx vror.vi
+;; vwsll.vv vwsll.vx vwsll.vi
+
+(define_insn "@pred_vandn<mode>"
+  [(set (match_operand:VI 0 "register_operand" "=vd,vd")
+        (if_then_else:VI
+          (unspec:<VM>
+            [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+             (match_operand 5 "vector_length_operand" "rK,rK")
+             (match_operand 6 "const_int_operand" "i, i")
+             (match_operand 7 "const_int_operand" "i, i")
+             (match_operand 8 "const_int_operand" "i, i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (and:VI
+            (match_operand:VI 3 "register_operand" "vr,vr")
+            (not:VI (match_operand:VI 4 "register_operand" "vr,vr")))
+          (match_operand:VI 2 "vector_merge_operand" "vu, 0")))]
+  "TARGET_ZVBB || TARGET_ZVKB"
+  "vandn.vv\t%0,%3,%4%p1"
+  [(set_attr "type" "vandn")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_vandn<mode>_scalar"
+  [(set (match_operand:VI 0 "register_operand" "=vd,vd")
+        (if_then_else:VI
+          (unspec:<VM>
+            [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+             (match_operand 5 "vector_length_operand" "rK,rK")
+             (match_operand 6 "const_int_operand" "i, i")
+             (match_operand 7 "const_int_operand" "i, i")
+             (match_operand 8 "const_int_operand" "i, i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (and:VI
+            (match_operand:VI 3 "register_operand" "vr,vr")
+            (not:<VEL>
+              (match_operand:<VEL> 4 "register_operand" "r,r")))
+          (match_operand:VI 2 "vector_merge_operand" "vu, 0")))]
+  "TARGET_ZVBB || TARGET_ZVKB"
+  "vandn.vx\t%0,%3,%4%p1"
+  [(set_attr "type" "vandn")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_v<bitmanip_optab><mode>"
+  [(set (match_operand:VI 0 "register_operand" "=vd,vd")
+        (if_then_else:VI
+          (unspec:<VM>
+            [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+             (match_operand 5 "vector_length_operand" "rK,rK")
+             (match_operand 6 "const_int_operand" "i, i")
+             (match_operand 7 "const_int_operand" "i, i")
+             (match_operand 8 "const_int_operand" "i, i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (bitmanip_rotate:VI
+            (match_operand:VI 3 "register_operand" "vr,vr")
+            (match_operand:VI 4 "register_operand" "vr,vr"))
+          (match_operand:VI 2 "vector_merge_operand" "vu, 0")))]
+  "TARGET_ZVBB || TARGET_ZVKB"
+  "v<bitmanip_insn>.vv\t%0,%3,%4%p1"
+  [(set_attr "type" "v<bitmanip_insn>")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_v<bitmanip_optab><mode>_scalar"
+  [(set (match_operand:VI 0 "register_operand" "=vd, vd")
+        (if_then_else:VI
+          (unspec:<VM>
+            [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1, vmWc1")
+             (match_operand 5 "vector_length_operand" "rK, rK")
+             (match_operand 6 "const_int_operand" "i, i")
+             (match_operand 7 "const_int_operand" "i, i")
+             (match_operand 8 "const_int_operand" "i, i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (bitmanip_rotate:VI
+            (match_operand:VI 3 "register_operand" "vr, vr")
+            (match_operand 4 "pmode_register_operand" "r, r"))
+          (match_operand:VI 2 "vector_merge_operand" "vu, 0")))]
+  "TARGET_ZVBB || TARGET_ZVKB"
+  "v<bitmanip_insn>.vx\t%0,%3,%4%p1"
+  [(set_attr "type" "v<bitmanip_insn>")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "*pred_vror<mode>_scalar"
+  [(set (match_operand:VI 0 "register_operand" "=vd, vd")
+        (if_then_else:VI
+          (unspec:<VM>
+            [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1, vmWc1")
+             (match_operand 5 "vector_length_operand" "rK, rK")
+             (match_operand 6 "const_int_operand" "i, i")
+             (match_operand 7 "const_int_operand" "i, i")
+             (match_operand 8 "const_int_operand" "i, i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (rotatert:VI
+            (match_operand:VI 3 "register_operand" "vr, vr")
+            (match_operand 4 "const_csr_operand" "K, K"))
+          (match_operand:VI 2 "vector_merge_operand" "vu, 0")))]
+  "TARGET_ZVBB || TARGET_ZVKB"
+  "vror.vi\t%0,%3,%4%p1"
+  [(set_attr "type" "vror")
+   (set_attr "mode" "<MODE>")])
+
+(define_insn "@pred_vwsll<mode>"
+  [(set (match_operand:VWEXTI 0 "register_operand" "=&vd")
+        (if_then_else:VWEXTI
+          (unspec:<VM>
+            [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+             (match_operand 5 "vector_length_operand" " rK")
+             (match_operand 6 "const_int_operand" " i")
+             (match_operand 7 "const_int_operand" " i")
+             (match_operand 8 "const_int_operand" " i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (ashift:VWEXTI
+            (zero_extend:VWEXTI
+              (match_operand:<V_DOUBLE_TRUNC> 3 "register_operand" "vr"))
+            (match_operand:<V_DOUBLE_TRUNC> 4 "register_operand" "vr"))
+          (match_operand:VWEXTI 2 "vector_merge_operand" "0vu")))]
+  "TARGET_ZVBB"
+  "vwsll.vv\t%0,%3,%4%p1"
+  [(set_attr "type" "vwsll")
+   (set_attr "mode" "<VWEXTI:MODE>")])
+
+(define_insn "@pred_vwsll<mode>_scalar"
+  [(set (match_operand:VWEXTI 0 "register_operand" "=&vd")
+        (if_then_else:VWEXTI
+          (unspec:<VM>
+            [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+             (match_operand 5 "vector_length_operand" " rK")
+             (match_operand 6 "const_int_operand" " i")
+             (match_operand 7 "const_int_operand" " i")
+             (match_operand 8 "const_int_operand" " i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (ashift:VWEXTI
+            (zero_extend:VWEXTI
+              (match_operand:<V_DOUBLE_TRUNC> 3 "register_operand" "vr"))
+            (match_operand:<VSUBEL> 4 "pmode_reg_or_uimm5_operand" "rK"))
+          (match_operand:VWEXTI 2 "vector_merge_operand" "0vu")))]
+  "TARGET_ZVBB"
+  "vwsll.v%o4\t%0,%3,%4%p1"
+  [(set_attr "type" "vwsll")
+   (set_attr "mode" "<VWEXTI:MODE>")])
+
+;; vbrev.v vbrev8.v vrev8.v
+
+(define_insn "@pred_v<rev><mode>"
+  [(set (match_operand:VI 0 "register_operand" "=vd,vd")
+        (if_then_else:VI
+          (unspec:<VM>
+            [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+             (match_operand 4 "vector_length_operand" "rK,rK")
+             (match_operand 5 "const_int_operand" "i, i")
+             (match_operand 6 "const_int_operand" "i, i")
+             (match_operand 7 "const_int_operand" "i, i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:VI
+            [(match_operand:VI 3 "register_operand" "vr,vr")]UNSPEC_VRBB8)
+          (match_operand:VI 2 "vector_merge_operand" "vu, 0")))]
+  "TARGET_ZVBB || TARGET_ZVKB"
+  "v<rev>.v\t%0,%3%p1"
+  [(set_attr "type" "v<rev>")
+   (set_attr "mode" "<MODE>")])
+
+;; vclz.v vctz.v
+
+(define_insn "@pred_v<bitmanip_optab><mode>"
+  [(set (match_operand:VI 0 "register_operand" "=vd")
+        (clz_ctz_pcnt:VI
+          (parallel
+            [(match_operand:VI 2 "register_operand" "vr")
+             (unspec:<VM>
+               [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1")
+                (match_operand 3 "vector_length_operand" " rK")
+                (match_operand 4 "const_int_operand" " i")
+                (reg:SI VL_REGNUM)
+                (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)])))]
+  "TARGET_ZVBB"
+  "v<bitmanip_insn>.v\t%0,%2%p1"
+  [(set_attr "type" "v<bitmanip_insn>")
+   (set_attr "mode" "<MODE>")])
+
+;; zvbc instructions patterns.
+;; vclmul.vv vclmul.vx
+;; vclmulh.vv vclmulh.vx
+
+(define_insn "@pred_vclmul<h><mode>"
+  [(set (match_operand:VDI 0 "register_operand" "=vd,vd")
+        (if_then_else:VDI
+          (unspec:<VM>
+            [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+             (match_operand 5 "vector_length_operand" "rK,rK")
+             (match_operand 6 "const_int_operand" "i, i")
+             (match_operand 7 "const_int_operand" "i, i")
+             (match_operand 8 "const_int_operand" "i, i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:VDI
+            [(match_operand:VDI 3 "register_operand" "vr,vr")
+             (match_operand:VDI 4 "register_operand" "vr,vr")]UNSPEC_CLMUL)
+          (match_operand:VDI 2 "vector_merge_operand" "vu, 0")))]
+  "TARGET_ZVBC && TARGET_64BIT"
+  "vclmul<h>.vv\t%0,%3,%4%p1"
+  [(set_attr "type" "vclmul<h>")
+   (set_attr "mode" "<VDI:MODE>")])
+
+(define_insn "@pred_vclmul<h><mode>_scalar"
+  [(set (match_operand:VDI 0 "register_operand" "=vd,vd")
+        (if_then_else:VDI
+          (unspec:<VM>
+            [(match_operand:<VM> 1 "vector_mask_operand" "vmWc1,vmWc1")
+             (match_operand 5 "vector_length_operand" "rK,rK")
+             (match_operand 6 "const_int_operand" "i, i")
+             (match_operand 7 "const_int_operand" "i, i")
+             (match_operand 8 "const_int_operand" "i, i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:VDI
+            [(match_operand:VDI 3 "register_operand" "vr,vr")
+             (match_operand:<VDI:VEL> 4 "register_operand" "r,r")]UNSPEC_CLMUL)
+          (match_operand:VDI 2 "vector_merge_operand" "vu, 0")))]
+  "TARGET_ZVBC && TARGET_64BIT"
+  "vclmul<h>.vx\t%0,%3,%4%p1"
+  [(set_attr "type" "vclmul<h>")
+   (set_attr "mode" "<VDI:MODE>")])
+
+;; zvknh[ab] and zvkg instructions patterns.
+;; vsha2ms.vv vsha2ch.vv vsha2cl.vv vghsh.vv
+
+(define_insn "@pred_v<vv_ins1_name><mode>"
+  [(set (match_operand:VQEXTI 0 "register_operand" "=vd")
+        (if_then_else:VQEXTI
+          (unspec:<VQEXTI:VM>
+            [(match_operand 4 "vector_length_operand" "rK")
+             (match_operand 5 "const_int_operand" " i")
+             (match_operand 6 "const_int_operand" " i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:VQEXTI
+            [(match_operand:VQEXTI 1 "register_operand" " 0")
+             (match_operand:VQEXTI 2 "register_operand" "vr")
+             (match_operand:VQEXTI 3 "register_operand" "vr")] UNSPEC_VGNHAB)
+          (match_dup 1)))]
+  "TARGET_ZVKNHA || TARGET_ZVKNHB || TARGET_ZVKG"
+  "v<vv_ins1_name>.vv\t%0,%2,%3"
+  [(set_attr "type" "v<vv_ins1_name>")
+   (set_attr "mode" "<VQEXTI:MODE>")])
+
+;; zvkned instructions patterns.
+;; vgmul.vv vaesz.vs
+;; vaesef.[vv,vs] vaesem.[vv,vs] vaesdf.[vv,vs] vaesdm.[vv,vs]
+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type><mode>"
+  [(set (match_operand:VSI 0 "register_operand" "=vd")
+        (if_then_else:VSI
+          (unspec:<VM>
+            [(match_operand 3 "vector_length_operand" "rK")
+             (match_operand 4 "const_int_operand" " i")
+             (match_operand 5 "const_int_operand" " i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:VSI
+            [(match_operand:VSI 1 "register_operand" " 0")
+             (match_operand:VSI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
+          (match_dup 1)))]
+  "TARGET_ZVKG || TARGET_ZVKNED"
+  "v<vv_ins_name>.<ins_type>\t%0,%2"
+  [(set_attr "type" "v<vv_ins_name>")
+   (set_attr "mode" "<VSI:MODE>")])
+
+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x1<mode>_scalar"
+  [(set (match_operand:VSI 0 "register_operand" "=vd")
+        (if_then_else:VSI
+          (unspec:<VM>
+            [(match_operand 3 "vector_length_operand" "rK")
+             (match_operand 4 "const_int_operand" " i")
+             (match_operand 5 "const_int_operand" " i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:VSI
+            [(match_operand:VSI 1 "register_operand" " 0")
+             (match_operand:VSI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
+          (match_dup 1)))]
+  "TARGET_ZVKNED || TARGET_ZVKSED"
+  "v<vv_ins_name>.<ins_type>\t%0,%2"
+  [(set_attr "type" "v<vv_ins_name>")
+   (set_attr "mode" "<VSI:MODE>")])
+
+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x2<mode>_scalar"
+  [(set (match_operand:<VSIX2> 0 "register_operand" "=vd")
+        (if_then_else:<VSIX2>
+          (unspec:<VM>
+            [(match_operand 3 "vector_length_operand" "rK")
+             (match_operand 4 "const_int_operand" " i")
+             (match_operand 5 "const_int_operand" " i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:<VSIX2>
+            [(match_operand:<VSIX2> 1 "register_operand" " 0")
+             (match_operand:VLMULX2_SI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
+          (match_dup 1)))]
+  "TARGET_ZVKNED || TARGET_ZVKSED"
+  "v<vv_ins_name>.<ins_type>\t%0,%2"
+  [(set_attr "type" "v<vv_ins_name>")
+   (set_attr "mode" "<VLMULX2_SI:MODE>")])
+
+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x4<mode>_scalar"
+  [(set (match_operand:<VSIX4> 0 "register_operand" "=vd")
+        (if_then_else:<VSIX4>
+          (unspec:<VM>
+            [(match_operand 3 "vector_length_operand" "rK")
+             (match_operand 4 "const_int_operand" " i")
+             (match_operand 5 "const_int_operand" " i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:<VSIX4>
+            [(match_operand:<VSIX4> 1 "register_operand" " 0")
+             (match_operand:VLMULX4_SI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
+          (match_dup 1)))]
+  "TARGET_ZVKNED || TARGET_ZVKSED"
+  "v<vv_ins_name>.<ins_type>\t%0,%2"
+  [(set_attr "type" "v<vv_ins_name>")
+   (set_attr "mode" "<VLMULX4_SI:MODE>")])
+
+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x8<mode>_scalar"
+  [(set (match_operand:<VSIX8> 0 "register_operand" "=vd")
+        (if_then_else:<VSIX8>
+          (unspec:<VM>
+            [(match_operand 3 "vector_length_operand" "rK")
+             (match_operand 4 "const_int_operand" " i")
+             (match_operand 5 "const_int_operand" " i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:<VSIX8>
+            [(match_operand:<VSIX8> 1 "register_operand" " 0")
+             (match_operand:VLMULX8_SI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
+          (match_dup 1)))]
+  "TARGET_ZVKNED || TARGET_ZVKSED"
+  "v<vv_ins_name>.<ins_type>\t%0,%2"
+  [(set_attr "type" "v<vv_ins_name>")
+   (set_attr "mode" "<VLMULX8_SI:MODE>")])
+
+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x16<mode>_scalar"
+  [(set (match_operand:<VSIX16> 0 "register_operand" "=vd")
+        (if_then_else:<VSIX16>
+          (unspec:<VM>
+            [(match_operand 3 "vector_length_operand" "rK")
+             (match_operand 4 "const_int_operand" " i")
+             (match_operand 5 "const_int_operand" " i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:<VSIX16>
+            [(match_operand:<VSIX16> 1 "register_operand" " 0")
+             (match_operand:VLMULX16_SI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
+          (match_dup 1)))]
+  "TARGET_ZVKNED || TARGET_ZVKSED"
+  "v<vv_ins_name>.<ins_type>\t%0,%2"
+  [(set_attr "type" "v<vv_ins_name>")
+   (set_attr "mode" "<VLMULX16_SI:MODE>")])
+
+;; vaeskf1.vi vsm4k.vi
+(define_insn "@pred_crypto_vi<vi_ins_name><mode>_scalar"
+  [(set (match_operand:VSI 0 "register_operand" "=vd, vd")
+        (if_then_else:VSI
+          (unspec:<VM>
+            [(match_operand 4 "vector_length_operand" "rK, rK")
+             (match_operand 5 "const_int_operand" " i, i")
+             (match_operand 6 "const_int_operand" " i, i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:VSI
+            [(match_operand:VSI 2 "register_operand" "vr, vr")
+             (match_operand:<VEL> 3 "const_int_operand" " i, i")] UNSPEC_CRYPTO_VI)
+          (match_operand:VSI 1 "vector_merge_operand" "vu, 0")))]
+  "TARGET_ZVKNED || TARGET_ZVKSED"
+  "v<vi_ins_name>.vi\t%0,%2,%3"
+  [(set_attr "type" "v<vi_ins_name>")
+   (set_attr "mode" "<MODE>")])
+
+;; vaeskf2.vi vsm3c.vi
+(define_insn "@pred_vi<vi_ins1_name><mode>_nomaskedoff_scalar"
+  [(set (match_operand:VSI 0 "register_operand" "=vd")
+        (if_then_else:VSI
+          (unspec:<VSI:VM>
+            [(match_operand 4 "vector_length_operand" "rK")
+             (match_operand 5 "const_int_operand" " i")
+             (match_operand 6 "const_int_operand" " i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:VSI
+            [(match_operand:VSI 1 "register_operand" "0")
+             (match_operand:VSI 2 "register_operand" "vr")
+             (match_operand:<VEL> 3 "const_int_operand" " i")] UNSPEC_CRYPTO_VI1)
+          (match_dup 1)))]
+  "TARGET_ZVKNED || TARGET_ZVKSH"
+  "v<vi_ins1_name>.vi\t%0,%2,%3"
+  [(set_attr "type" "v<vi_ins1_name>")
+   (set_attr "mode" "<MODE>")])
+
+;; zvksh instructions patterns.
+;; vsm3me.vv
+
+(define_insn "@pred_vsm3me<mode>"
+  [(set (match_operand:VSI 0 "register_operand" "=vd, vd")
+        (if_then_else:VSI
+          (unspec:<VM>
+            [(match_operand 4 "vector_length_operand" "rK, rK")
+             (match_operand 5 "const_int_operand" " i, i")
+             (match_operand 6 "const_int_operand" " i, i")
+             (reg:SI VL_REGNUM)
+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
+          (unspec:VSI
+            [(match_operand:VSI 2 "register_operand" "vr, vr")
+             (match_operand:VSI 3 "register_operand" "vr, vr")] UNSPEC_VSM3ME)
+          (match_operand:VSI 1 "vector_merge_operand" "vu, 0")))]
+  "TARGET_ZVKSH"
+  "vsm3me.vv\t%0,%2,%3"
+  [(set_attr "type" "vsm3me")
+   (set_attr "mode" "<VSI:MODE>")])
\ No newline at end of file
diff --git a/gcc/config/riscv/vector-iterators.md b/gcc/config/riscv/vector-iterators.md
index 56080ed1f5f..1b16b476035 100644
--- a/gcc/config/riscv/vector-iterators.md
+++ b/gcc/config/riscv/vector-iterators.md
@@ -3916,3 +3916,44 @@
 (V1024BI "riscv_vector::vls_mode_valid_p (V1024BImode) && TARGET_MIN_VLEN >= 1024")
 (V2048BI "riscv_vector::vls_mode_valid_p (V2048BImode) && TARGET_MIN_VLEN >= 2048")
 (V4096BI "riscv_vector::vls_mode_valid_p (V4096BImode) && TARGET_MIN_VLEN >= 4096")])
+
+(define_mode_iterator VSI [
+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+])
+
+(define_mode_iterator VLMULX2_SI [
+  RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+])
+
+(define_mode_iterator VLMULX4_SI [
+  RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+])
+
+(define_mode_iterator VLMULX8_SI [
+  RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
+])
+
+(define_mode_iterator VLMULX16_SI [
+  (RVVMF2SI "TARGET_MIN_VLEN > 32")
+])
+
+(define_mode_attr VSIX2 [
+  (RVVM8SI "RVVM8SI") (RVVM4SI "RVVM8SI") (RVVM2SI "RVVM4SI") (RVVM1SI "RVVM2SI") (RVVMF2SI "RVVM1SI")
+])
+
+(define_mode_attr VSIX4 [
+  (RVVM2SI "RVVM8SI") (RVVM1SI "RVVM4SI") (RVVMF2SI "RVVM2SI")
+])
+
+(define_mode_attr VSIX8 [
+  (RVVM1SI "RVVM8SI") (RVVMF2SI "RVVM4SI")
+])
+
+(define_mode_attr VSIX16 [
+  (RVVMF2SI "RVVM8SI")
+])
+
+(define_mode_iterator VDI [
+  (RVVM8DI "TARGET_VECTOR_ELEN_64") (RVVM4DI "TARGET_VECTOR_ELEN_64")
+  (RVVM2DI "TARGET_VECTOR_ELEN_64") (RVVM1DI "TARGET_VECTOR_ELEN_64")
+])
\ No newline at end of file
diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index ba9c9e5a9b6..36cb7510ec6 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -52,7 +52,9 @@
          vmalu,vmpop,vmffs,vmsfs,vmiota,vmidx,vimovvx,vimovxv,vfmovvf,vfmovfv,\
          vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
          vgather,vcompress,vlsegde,vssegte,vlsegds,vssegts,vlsegdux,vlsegdox,\
-         vssegtux,vssegtox,vlsegdff")
+         vssegtux,vssegtox,vlsegdff,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,\
+         vror,vwsll,vclmul,vclmulh,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
+         vaeskf1,vaeskf2,vaesz,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r,vsm3me,vsm3c")
     (const_string "true")]
    (const_string "false")))
@@ -74,7 +76,9 @@
          vmalu,vmpop,vmffs,vmsfs,vmiota,vmidx,vimovxv,vfmovfv,\
          vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
          vgather,vcompress,vlsegde,vssegte,vlsegds,vssegts,vlsegdux,vlsegdox,\
-         vssegtux,vssegtox,vlsegdff")
+         vssegtux,vssegtox,vlsegdff,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,\
+         vror,vwsll,vclmul,vclmulh,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
+         vaeskf1,vaeskf2,vaesz,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r,vsm3me,vsm3c")
     (const_string "true")]
    (const_string "false")))
@@ -698,10 +702,12 @@
           vfwcvtftoi,vfwcvtftof,vfncvtitof,vfncvtftoi,vfncvtftof,vfclass,\
           vired,viwred,vfredu,vfredo,vfwredu,vfwredo,vimovxv,vfmovfv,\
           vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
-          vgather,vldff,viwmuladd,vfwmuladd,vlsegde,vlsegds,vlsegdux,vlsegdox,vlsegdff")
+          vgather,vldff,viwmuladd,vfwmuladd,vlsegde,vlsegds,vlsegdux,vlsegdox,vlsegdff,\
+          vandn,vbrev,vbrev8,vrev8,vrol,vror,vwsll,vclmul,vclmulh")
       (const_int 2)
-      (eq_attr "type" "vimerge,vfmerge,vcompress")
+      (eq_attr "type" "vimerge,vfmerge,vcompress,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
+                       vaeskf1,vaeskf2,vaesz,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r,vsm3me,vsm3c")
      (const_int 1)
      (eq_attr "type" "vimuladd,vfmuladd")
@@ -740,7 +746,8 @@
          vstox,vext,vmsfs,vmiota,vfsqrt,vfrecp,vfcvtitof,vldff,\
          vfcvtftoi,vfwcvtitof,vfwcvtftoi,vfwcvtftof,vfncvtitof,\
          vfncvtftoi,vfncvtftof,vfclass,vimovxv,vfmovfv,vcompress,\
-         vlsegde,vssegts,vssegtux,vssegtox,vlsegdff")
+         vlsegde,vssegts,vssegtux,vssegtox,vlsegdff,vbrev,vbrev8,vrev8,\
+         vghsh,vaeskf1,vaeskf2,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm3me,vsm3c")
      (const_int 4)
 ;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
@@ -755,13 +762,15 @@
          vsshift,vnclip,vfalu,vfmul,vfminmax,vfdiv,vfwalu,vfwmul,\
          vfsgnj,vfmerge,vired,viwred,vfredu,vfredo,vfwredu,vfwredo,\
          vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
-         vgather,viwmuladd,vfwmuladd,vlsegds,vlsegdux,vlsegdox")
+         vgather,viwmuladd,vfwmuladd,vlsegds,vlsegdux,vlsegdox,vandn,vrol,\
+         vror,vwsll,vclmul,vclmulh")
      (const_int 5)
      (eq_attr "type" "vicmp,vimuladd,vfcmp,vfmuladd")
      (const_int 6)
-     (eq_attr "type" "vmpop,vmffs,vmidx,vssegte")
+     (eq_attr "type" "vmpop,vmffs,vmidx,vssegte,vclz,vctz,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
+                      vaesz,vsm4r")
      (const_int 3)]
   (const_int INVALID_ATTRIBUTE)))
@@ -770,7 +779,8 @@
  (cond [(eq_attr "type" "vlde,vimov,vfmov,vext,vmiota,vfsqrt,vfrecp,\
          vfcvtitof,vfcvtftoi,vfwcvtitof,vfwcvtftoi,vfwcvtftof,\
          vfncvtitof,vfncvtftoi,vfncvtftof,vfclass,vimovxv,vfmovfv,\
-         vcompress,vldff,vlsegde,vlsegdff")
+         vcompress,vldff,vlsegde,vlsegdff,vbrev,vbrev8,vrev8,vghsh,\
+         vaeskf1,vaeskf2,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm3me,vsm3c")
     (symbol_ref "riscv_vector::get_ta(operands[5])")
 ;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
@@ -786,13 +796,13 @@
          vfwalu,vfwmul,vfsgnj,vfmerge,vired,viwred,vfredu,\
          vfredo,vfwredu,vfwredo,vslideup,vslidedown,vislide1up,\
          vislide1down,vfslide1up,vfslide1down,vgather,viwmuladd,vfwmuladd,\
-         vlsegds,vlsegdux,vlsegdox")
+         vlsegds,vlsegdux,vlsegdox,vandn,vrol,vror,vwsll,vclmul,vclmulh")
     (symbol_ref "riscv_vector::get_ta(operands[6])")
      (eq_attr "type" "vimuladd,vfmuladd")
     (symbol_ref "riscv_vector::get_ta(operands[7])")
-     (eq_attr "type" "vmidx")
+     (eq_attr "type" "vmidx,vgmul,vaesef,vaesem,vaesdf,vaesdm,vaesz,vsm4r")
     (symbol_ref "riscv_vector::get_ta(operands[4])")]
   (const_int INVALID_ATTRIBUTE)))
@@ -800,7 +810,7 @@
 (define_attr "ma" ""
   (cond [(eq_attr "type" "vlde,vext,vmiota,vfsqrt,vfrecp,vfcvtitof,vfcvtftoi,\
          vfwcvtitof,vfwcvtftoi,vfwcvtftof,vfncvtitof,vfncvtftoi,\
-         vfncvtftof,vfclass,vldff,vlsegde,vlsegdff")
+         vfncvtftof,vfclass,vldff,vlsegde,vlsegdff,vbrev,vbrev8,vrev8")
     (symbol_ref "riscv_vector::get_ma(operands[6])")
 ;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
@@ -815,7 +825,8 @@
          vnclip,vicmp,vfalu,vfmul,vfminmax,vfdiv,\
          vfwalu,vfwmul,vfsgnj,vfcmp,vslideup,vslidedown,\
          vislide1up,vislide1down,vfslide1up,vfslide1down,vgather,\
-         viwmuladd,vfwmuladd,vlsegds,vlsegdux,vlsegdox")
+         viwmuladd,vfwmuladd,vlsegds,vlsegdux,vlsegdox,vandn,vrol,\
+         vror,vwsll,vclmul,vclmulh")
     (symbol_ref "riscv_vector::get_ma(operands[7])")
      (eq_attr "type" "vimuladd,vfmuladd")
@@ -831,9 +842,10 @@
          vfsqrt,vfrecp,vfmerge,vfcvtitof,vfcvtftoi,vfwcvtitof,\
          vfwcvtftoi,vfwcvtftof,vfncvtitof,vfncvtftoi,vfncvtftof,\
          vfclass,vired,viwred,vfredu,vfredo,vfwredu,vfwredo,\
-         vimovxv,vfmovfv,vlsegde,vlsegdff")
+         vimovxv,vfmovfv,vlsegde,vlsegdff,vbrev,vbrev8,vrev8")
      (const_int 7)
-     (eq_attr "type" "vldm,vstm,vmalu,vmalu")
+     (eq_attr "type" "vldm,vstm,vmalu,vmalu,vgmul,vaesef,vaesem,vaesdf,vaesdm,vaesz,\
+                      vsm4r")
      (const_int 5)
 ;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
@@ -848,18 +860,19 @@
          vnclip,vicmp,vfalu,vfmul,vfminmax,vfdiv,vfwalu,vfwmul,\
          vfsgnj,vfcmp,vslideup,vslidedown,vislide1up,\
         vislide1down,vfslide1up,vfslide1down,vgather,viwmuladd,vfwmuladd,\
-         vlsegds,vlsegdux,vlsegdox")
+         vlsegds,vlsegdux,vlsegdox,vandn,vrol,vror,vwsll")
      (const_int 8)
-     (eq_attr "type" "vstux,vstox,vssegts,vssegtux,vssegtox")
+     (eq_attr "type" "vstux,vstox,vssegts,vssegtux,vssegtox,vclmul,vclmulh")
      (const_int 5)
      (eq_attr "type" "vimuladd,vfmuladd")
      (const_int 9)
-     (eq_attr "type" "vmsfs,vmidx,vcompress")
+     (eq_attr "type" "vmsfs,vmidx,vcompress,vghsh,vaeskf1,vaeskf2,vsha2ms,vsha2ch,vsha2cl,\
+                      vsm4k,vsm3me,vsm3c")
      (const_int 6)
-     (eq_attr "type" "vmpop,vmffs,vssegte")
+     (eq_attr "type" "vmpop,vmffs,vssegte,vclz,vctz")
      (const_int 4)]
   (const_int INVALID_ATTRIBUTE)))
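[Editor's note] The low/high split between the `vclmul` and `vclmulh` patterns above is easy to mis-read, so here is a minimal Python model of the carry-less (GF(2)[x]) product. The helper names are ours; EEW is fixed at 64 bits, matching the DI-mode `VDI` iterator the patterns use:

```python
EEW = 64  # the patterns operate on 64-bit elements (VDI modes)

def clmul_full(a, b):
    # carry-less multiply: partial products are combined with XOR, not ADD
    acc = 0
    for i in range(EEW):
        if (b >> i) & 1:
            acc ^= a << i
    return acc  # 2*EEW-bit product

def vclmul(vs2, vs1):
    # vclmul.vv: low EEW bits of each element's carry-less product
    return [clmul_full(a, b) & ((1 << EEW) - 1) for a, b in zip(vs2, vs1)]

def vclmulh(vs2, vs1):
    # vclmulh.vv: high EEW bits of each element's carry-less product
    return [clmul_full(a, b) >> EEW for a, b in zip(vs2, vs1)]
```

For instance, clmul_full(3, 3) is 5, i.e. (x+1)^2 = x^2+1 over GF(2), while an ordinary multiply would give 9.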