Message ID: 20210929161920.GA26634@arm.com
State: New
Series: AArch64 Optimize truncation, shifts and bitmask comparisons
Hi Tamar,

> -----Original Message-----
> From: Tamar Christina <Tamar.Christina@arm.com>
> Sent: Wednesday, September 29, 2021 5:19 PM
> To: gcc-patches@gcc.gnu.org
> Cc: nd <nd@arm.com>; Richard Earnshaw <Richard.Earnshaw@arm.com>;
> Marcus Shawcroft <Marcus.Shawcroft@arm.com>; Kyrylo Tkachov
> <Kyrylo.Tkachov@arm.com>; Richard Sandiford
> <Richard.Sandiford@arm.com>
> Subject: [PATCH 1/7]AArch64 Add combine patterns for right shift and
> narrow
>
> Hi All,
>
> This adds a simple pattern for combining right shifts and narrows into
> shifted narrows.
>
> i.e.
>
> typedef short int16_t;
> typedef unsigned short uint16_t;
>
> void foo (uint16_t * restrict a, int16_t * restrict d, int n)
> {
>   for( int i = 0; i < n; i++ )
>     d[i] = (a[i] * a[i]) >> 10;
> }
>
> now generates:
>
> .L4:
>         ldr     q0, [x0, x3]
>         umull   v1.4s, v0.4h, v0.4h
>         umull2  v0.4s, v0.8h, v0.8h
>         shrn    v1.4h, v1.4s, 10
>         shrn2   v1.8h, v0.4s, 10
>         str     q1, [x1, x3]
>         add     x3, x3, 16
>         cmp     x4, x3
>         bne     .L4
>
> instead of:
>
> .L4:
>         ldr     q0, [x0, x3]
>         umull   v1.4s, v0.4h, v0.4h
>         umull2  v0.4s, v0.8h, v0.8h
>         sshr    v1.4s, v1.4s, 10
>         sshr    v0.4s, v0.4s, 10
>         xtn     v1.4h, v1.4s
>         xtn2    v1.8h, v0.4s
>         str     q1, [x1, x3]
>         add     x3, x3, 16
>         cmp     x4, x3
>         bne     .L4
>
> Bootstrapped Regtested on aarch64-none-linux-gnu and no issues.
>
> Ok for master?
>
> Thanks,
> Tamar
>
> gcc/ChangeLog:
>
> 	* config/aarch64/aarch64-simd.md (*aarch64_<srn_op>shrn<mode>_vect,
> 	*aarch64_<srn_op>shrn<mode>2_vect): New.
> 	* config/aarch64/iterators.md (srn_op): New.
>
> gcc/testsuite/ChangeLog:
>
> 	* gcc.target/aarch64/shrn-combine.c: New test.
>
> --- inline copy of patch --
> diff --git a/gcc/config/aarch64/aarch64-simd.md
> b/gcc/config/aarch64/aarch64-simd.md
> index 48eddf64e05afe3788abfa05141f6544a9323ea1..d7b6cae424622d259f97a3d5fa9093c0fb0bd5ce 100644
> --- a/gcc/config/aarch64/aarch64-simd.md
> +++ b/gcc/config/aarch64/aarch64-simd.md
> @@ -1818,6 +1818,28 @@ (define_insn "aarch64_shrn<mode>_insn_be"
>    [(set_attr "type" "neon_shift_imm_narrow_q")]
>  )
>
> +(define_insn "*aarch64_<srn_op>shrn<mode>_vect"
> +  [(set (match_operand:<VNARROWQ> 0 "register_operand" "=w")
> +	(truncate:<VNARROWQ>
> +	  (SHIFTRT:VQN (match_operand:VQN 1 "register_operand" "w")
> +	    (match_operand:VQN 2 "aarch64_simd_shift_imm_vec_<vn_mode>"))))]
> +  "TARGET_SIMD"
> +  "shrn\\t%0.<Vntype>, %1.<Vtype>, %2"
> +  [(set_attr "type" "neon_shift_imm_narrow_q")]
> +)
> +
> +(define_insn "*aarch64_<srn_op>shrn<mode>2_vect"
> +  [(set (match_operand:<VNARROWQ2> 0 "register_operand" "=w")
> +	(vec_concat:<VNARROWQ2>
> +	  (match_operand:<VNARROWQ> 1 "register_operand" "0")
> +	  (truncate:<VNARROWQ>
> +	    (SHIFTRT:VQN (match_operand:VQN 2 "register_operand" "w")
> +	      (match_operand:VQN 3 "aarch64_simd_shift_imm_vec_<vn_mode>")))))]
> +  "TARGET_SIMD"
> +  "shrn2\\t%0.<V2ntype>, %2.<Vtype>, %3"
> +  [(set_attr "type" "neon_shift_imm_narrow_q")]
> +)

I think this needs to be guarded on !BYTES_BIG_ENDIAN and a similar pattern
added for BYTES_BIG_ENDIAN with the vec_concat operands swapped around.
This is similar to the aarch64_xtn2<mode>_insn_be pattern, for example.

Thanks,
Kyrill

> +
>  (define_expand "aarch64_shrn<mode>"
>    [(set (match_operand:<VNARROWQ> 0 "register_operand")
>  	(truncate:<VNARROWQ>
> diff --git a/gcc/config/aarch64/iterators.md
> b/gcc/config/aarch64/iterators.md
> index caa42f8f169fbf2cf46a90cf73dee05619acc300..8dbeed3b0d4a44cdc17dd333ed397b39a33f386a 100644
> --- a/gcc/config/aarch64/iterators.md
> +++ b/gcc/config/aarch64/iterators.md
> @@ -2003,6 +2003,9 @@ (define_code_attr shift [(ashift "lsl") (ashiftrt "asr")
>  ;; Op prefix for shift right and accumulate.
>  (define_code_attr sra_op [(ashiftrt "s") (lshiftrt "u")])
>
> +;; op prefix for shift right and narrow.
> +(define_code_attr srn_op [(ashiftrt "r") (lshiftrt "")])
> +
>  ;; Map shift operators onto underlying bit-field instructions
>  (define_code_attr bfshift [(ashift "ubfiz") (ashiftrt "sbfx")
> 			   (lshiftrt "ubfx") (rotatert "extr")])
> diff --git a/gcc/testsuite/gcc.target/aarch64/shrn-combine.c
> b/gcc/testsuite/gcc.target/aarch64/shrn-combine.c
> new file mode 100644
> index 0000000000000000000000000000000000000000..0187f49f4dcc76182c90366caaf00d294e835707
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/aarch64/shrn-combine.c
> @@ -0,0 +1,14 @@
> +/* { dg-do assemble } */
> +/* { dg-options "-O3 --save-temps --param=vect-epilogues-nomask=0" } */
> +
> +typedef short int16_t;
> +typedef unsigned short uint16_t;
> +
> +void foo (uint16_t * restrict a, int16_t * restrict d, int n)
> +{
> +  for( int i = 0; i < n; i++ )
> +    d[i] = (a[i] * a[i]) >> 10;
> +}
> +
> +/* { dg-final { scan-assembler-times {\tshrn\t} 1 } } */
> +/* { dg-final { scan-assembler-times {\tshrn2\t} 1 } } */
>
> --
(Nice optimisations!)

Kyrylo Tkachov <Kyrylo.Tkachov@arm.com> writes:
> Hi Tamar,
>
> [...]
>
> I think this needs to be guarded on !BYTES_BIG_ENDIAN and a similar
> pattern added for BYTES_BIG_ENDIAN with the vec_concat operands
> swapped around.
> This is similar to the aarch64_xtn2<mode>_insn_be pattern, for example.

Yeah.  I think that applies to 2/7 and 4/7 too.

Thanks,
Richard
Hi All,

Here's a new version with big-endian support and more tests

> >
> > I think this needs to be guarded on !BYTES_BIG_ENDIAN and a similar
> > pattern added for BYTES_BIG_ENDIAN with the vec_concat operands
> > swapped around.
> > This is similar to the aarch64_xtn2<mode>_insn_be pattern, for example.
>
> Yeah.  I think that applies to 2/7 and 4/7 too.

Bootstrapped Regtested on aarch64-none-linux-gnu and no issues.

Ok for master?

Thanks,
Tamar

gcc/ChangeLog:

	* config/aarch64/aarch64-simd.md (*aarch64_<srn_op>shrn<mode>_vect,
	*aarch64_<srn_op>shrn<mode>2_vect_le,
	*aarch64_<srn_op>shrn<mode>2_vect_be): New.
	* config/aarch64/iterators.md (srn_op): New.

gcc/testsuite/ChangeLog:

	* gcc.target/aarch64/shrn-combine-1.c: New test.
	* gcc.target/aarch64/shrn-combine-2.c: New test.
	* gcc.target/aarch64/shrn-combine-3.c: New test.
	* gcc.target/aarch64/shrn-combine-4.c: New test.

--- inline copy of patch ---

diff --git a/gcc/config/aarch64/aarch64-simd.md b/gcc/config/aarch64/aarch64-simd.md
index 48eddf64e05afe3788abfa05141f6544a9323ea1..5715db4e1e1386e724e4d4defd5e5ed9efd8a874 100644
--- a/gcc/config/aarch64/aarch64-simd.md
+++ b/gcc/config/aarch64/aarch64-simd.md
@@ -1818,6 +1818,40 @@ (define_insn "aarch64_shrn<mode>_insn_be"
   [(set_attr "type" "neon_shift_imm_narrow_q")]
 )
 
+(define_insn "*aarch64_<srn_op>shrn<mode>_vect"
+  [(set (match_operand:<VNARROWQ> 0 "register_operand" "=w")
+	(truncate:<VNARROWQ>
+	  (SHIFTRT:VQN (match_operand:VQN 1 "register_operand" "w")
+	    (match_operand:VQN 2 "aarch64_simd_shift_imm_vec_<vn_mode>"))))]
+  "TARGET_SIMD"
+  "shrn\\t%0.<Vntype>, %1.<Vtype>, %2"
+  [(set_attr "type" "neon_shift_imm_narrow_q")]
+)
+
+(define_insn "*aarch64_<srn_op>shrn<mode>2_vect_le"
+  [(set (match_operand:<VNARROWQ2> 0 "register_operand" "=w")
+	(vec_concat:<VNARROWQ2>
+	  (match_operand:<VNARROWQ> 1 "register_operand" "0")
+	  (truncate:<VNARROWQ>
+	    (SHIFTRT:VQN (match_operand:VQN 2 "register_operand" "w")
+	      (match_operand:VQN 3 "aarch64_simd_shift_imm_vec_<vn_mode>")))))]
+  "TARGET_SIMD && !BYTES_BIG_ENDIAN"
+  "shrn2\\t%0.<V2ntype>, %2.<Vtype>, %3"
+  [(set_attr "type" "neon_shift_imm_narrow_q")]
+)
+
+(define_insn "*aarch64_<srn_op>shrn<mode>2_vect_be"
+  [(set (match_operand:<VNARROWQ2> 0 "register_operand" "=w")
+	(vec_concat:<VNARROWQ2>
+	  (truncate:<VNARROWQ>
+	    (SHIFTRT:VQN (match_operand:VQN 2 "register_operand" "w")
+	      (match_operand:VQN 3 "aarch64_simd_shift_imm_vec_<vn_mode>")))
+	  (match_operand:<VNARROWQ> 1 "register_operand" "0")))]
+  "TARGET_SIMD && BYTES_BIG_ENDIAN"
+  "shrn2\\t%0.<V2ntype>, %2.<Vtype>, %3"
+  [(set_attr "type" "neon_shift_imm_narrow_q")]
+)
+
 (define_expand "aarch64_shrn<mode>"
   [(set (match_operand:<VNARROWQ> 0 "register_operand")
 	(truncate:<VNARROWQ>
diff --git a/gcc/config/aarch64/iterators.md b/gcc/config/aarch64/iterators.md
index caa42f8f169fbf2cf46a90cf73dee05619acc300..8dbeed3b0d4a44cdc17dd333ed397b39a33f386a 100644
--- a/gcc/config/aarch64/iterators.md
+++ b/gcc/config/aarch64/iterators.md
@@ -2003,6 +2003,9 @@ (define_code_attr shift [(ashift "lsl") (ashiftrt "asr")
 ;; Op prefix for shift right and accumulate.
 (define_code_attr sra_op [(ashiftrt "s") (lshiftrt "u")])
 
+;; op prefix for shift right and narrow.
+(define_code_attr srn_op [(ashiftrt "r") (lshiftrt "")])
+
 ;; Map shift operators onto underlying bit-field instructions
 (define_code_attr bfshift [(ashift "ubfiz") (ashiftrt "sbfx")
 			   (lshiftrt "ubfx") (rotatert "extr")])
diff --git a/gcc/testsuite/gcc.target/aarch64/shrn-combine-1.c b/gcc/testsuite/gcc.target/aarch64/shrn-combine-1.c
new file mode 100644
index 0000000000000000000000000000000000000000..a28524662edca8eb149e34c2242091b51a167b71
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/shrn-combine-1.c
@@ -0,0 +1,13 @@
+/* { dg-do assemble } */
+/* { dg-options "-O3 --save-temps --param=vect-epilogues-nomask=0" } */
+
+#define TYPE char
+
+void foo (unsigned TYPE * restrict a, TYPE * restrict d, int n)
+{
+  for( int i = 0; i < n; i++ )
+    d[i] = (a[i] * a[i]) >> 2;
+}
+
+/* { dg-final { scan-assembler-times {\tshrn\t} 1 } } */
+/* { dg-final { scan-assembler-times {\tshrn2\t} 1 } } */
diff --git a/gcc/testsuite/gcc.target/aarch64/shrn-combine-2.c b/gcc/testsuite/gcc.target/aarch64/shrn-combine-2.c
new file mode 100644
index 0000000000000000000000000000000000000000..012135b424f98abadc480e7ef13fcab080d99c28
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/shrn-combine-2.c
@@ -0,0 +1,13 @@
+/* { dg-do assemble } */
+/* { dg-options "-O3 --save-temps --param=vect-epilogues-nomask=0" } */
+
+#define TYPE short
+
+void foo (unsigned TYPE * restrict a, TYPE * restrict d, int n)
+{
+  for( int i = 0; i < n; i++ )
+    d[i] = (a[i] * a[i]) >> 2;
+}
+
+/* { dg-final { scan-assembler-times {\tshrn\t} 1 } } */
+/* { dg-final { scan-assembler-times {\tshrn2\t} 1 } } */
diff --git a/gcc/testsuite/gcc.target/aarch64/shrn-combine-3.c b/gcc/testsuite/gcc.target/aarch64/shrn-combine-3.c
new file mode 100644
index 0000000000000000000000000000000000000000..8b5b360de623b0ada0da1531795ba6b428c7f9e1
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/shrn-combine-3.c
@@ -0,0 +1,13 @@
+/* { dg-do assemble } */
+/* { dg-options "-O3 --save-temps --param=vect-epilogues-nomask=0" } */
+
+#define TYPE int
+
+void foo (unsigned long long * restrict a, TYPE * restrict d, int n)
+{
+  for( int i = 0; i < n; i++ )
+    d[i] = a[i] >> 3;
+}
+
+/* { dg-final { scan-assembler-times {\tshrn\t} 1 } } */
+/* { dg-final { scan-assembler-times {\tshrn2\t} 1 } } */
diff --git a/gcc/testsuite/gcc.target/aarch64/shrn-combine-4.c b/gcc/testsuite/gcc.target/aarch64/shrn-combine-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..fedca7621e2a82df0df9d12b91c5c0c9fd3dfc60
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/shrn-combine-4.c
@@ -0,0 +1,13 @@
+/* { dg-do assemble } */
+/* { dg-options "-O3 --save-temps --param=vect-epilogues-nomask=0" } */
+
+#define TYPE long long
+
+void foo (unsigned TYPE * restrict a, TYPE * restrict d, int n)
+{
+  for( int i = 0; i < n; i++ )
+    d[i] = (a[i] * a[i]) >> 2;
+}
+
+/* { dg-final { scan-assembler-not {\tshrn\t} } } */
+/* { dg-final { scan-assembler-not {\tshrn2\t} } } */
> -----Original Message-----
> From: Tamar Christina <Tamar.Christina@arm.com>
> Sent: Tuesday, October 12, 2021 5:18 PM
> To: Richard Sandiford <Richard.Sandiford@arm.com>; Kyrylo Tkachov
> <Kyrylo.Tkachov@arm.com>
> Cc: gcc-patches@gcc.gnu.org; nd <nd@arm.com>; Richard Earnshaw
> <Richard.Earnshaw@arm.com>; Marcus Shawcroft
> <Marcus.Shawcroft@arm.com>
> Subject: RE: [PATCH 1/7]AArch64 Add combine patterns for right shift and
> narrow
>
> Hi All,
>
> Here's a new version with big-endian support and more tests
>
> [...]
>
> Bootstrapped Regtested on aarch64-none-linux-gnu and no issues.
>
> Ok for master?

Ok.
Thanks,
Kyrill