middle-end/114189 - drop uses of vcond{,u,eq}_optab

Message ID 20240614103115.1DB0213AB1@imap1.dmz-prg2.suse.org
State New
Series middle-end/114189 - drop uses of vcond{,u,eq} optabs

Commit Message

Richard Biener June 14, 2024, 10:31 a.m. UTC
The following retires the vcond{,u,eq} optabs by no longer using them
from the middle-end.  Targets should instead implement the vcond_mask
and vec_cmp{,u,eq} optabs.  The PR this change refers to lists the
possibly affected targets - those implementing these patterns -
and in particular it names mips, sparc and ia64 as targets that
will most definitely regress, while others might simply remove
their vcond{,u,eq} patterns.

I'd appreciate testing; I do not expect fallout for x86 or arm/aarch64,
and I know riscv doesn't implement any of the legacy optabs.  But less
maintained vector targets might need adjustments.

I want to get rid of those optabs for GCC 15.  If I don't hear from
you I will assume your target is fine.

Thanks,
Richard.

	PR middle-end/114189
	* optabs-query.h (get_vcond_icode): Always return CODE_FOR_nothing.
	(get_vcond_eq_icode): Likewise.
---
 gcc/optabs-query.h | 13 ++++---------
 1 file changed, 4 insertions(+), 9 deletions(-)
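
For reference, a minimal sketch (not part of the patch) of how middle-end
code can check for support once only the retained optabs are used.  It
assumes the existing get_vec_cmp_icode and get_vcond_mask_icode helpers
from optabs-query.h, with MASK_MODE being the mode of the vector
comparison result; the function name is illustrative only:

  /* Illustrative helper, not an existing function: can a vector
     conditional of mode VMODE be emulated with a vec_cmp{,u} producing
     MASK_MODE followed by a vcond_mask?  */
  static bool
  vcond_via_mask_p (machine_mode vmode, machine_mode mask_mode, bool uns)
  {
    return (get_vec_cmp_icode (vmode, mask_mode, uns) != CODE_FOR_nothing
            && get_vcond_mask_icode (vmode, mask_mode) != CODE_FOR_nothing);
  }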

Comments

Richard Sandiford June 14, 2024, 11:03 a.m. UTC | #1
Richard Biener <rguenther@suse.de> writes:
> The following retires vcond{,u,eq} optabs by stopping to use them
> from the middle-end.  Targets instead (should) implement vcond_mask
> and vec_cmp{,u,eq} optabs.  The PR this change refers to lists
> possibly affected targets - those implementing these patterns,
> and in particular it lists mips, sparc and ia64 as targets that
> most definitely will regress while others might simply remove
> their vcond{,u,eq} patterns.
>
> I'd appreciate testing, I do not expect fallout for x86 or arm/aarch64.
> I know riscv doesn't implement any of the legacy optabs.  But less
> maintained vector targets might need adjustments.
>
> I want to get rid of those optabs for GCC 15.  If I don't hear from
> you I will assume your target is fine.

Great!  Thanks for doing this.

Is there a plan for how we should handle vector comparisons that
have to be done as the inverse of the negated condition?  Should
targets simply not provide vec_cmp for such conditions and leave
the target-independent code to deal with the fallout?  (For a
standalone comparison, it would invert the result.  For a VEC_COND_EXPR
it would swap the true and false values.)

Richard

>
> Thanks,
> Richard.
>
> 	PR middle-end/114189
> 	* optabs-query.h (get_vcond_icode): Always return CODE_FOR_nothing.
> 	(get_vcond_eq_icode): Likewise.
> ---
>  gcc/optabs-query.h | 13 ++++---------
>  1 file changed, 4 insertions(+), 9 deletions(-)
>
> diff --git a/gcc/optabs-query.h b/gcc/optabs-query.h
> index 0cb2c21ba85..31fbce80175 100644
> --- a/gcc/optabs-query.h
> +++ b/gcc/optabs-query.h
> @@ -112,14 +112,9 @@ get_vec_cmp_eq_icode (machine_mode vmode, machine_mode mask_mode)
>     mode CMODE, unsigned if UNS is true, resulting in a value of mode VMODE.  */
>  
>  inline enum insn_code
> -get_vcond_icode (machine_mode vmode, machine_mode cmode, bool uns)
> +get_vcond_icode (machine_mode, machine_mode, bool)
>  {
> -  enum insn_code icode = CODE_FOR_nothing;
> -  if (uns)
> -    icode = convert_optab_handler (vcondu_optab, vmode, cmode);
> -  else
> -    icode = convert_optab_handler (vcond_optab, vmode, cmode);
> -  return icode;
> +  return CODE_FOR_nothing;
>  }
>  
>  /* Return insn code for a conditional operator with a mask mode
> @@ -135,9 +130,9 @@ get_vcond_mask_icode (machine_mode vmode, machine_mode mmode)
>     mode CMODE (only EQ/NE), resulting in a value of mode VMODE.  */
>  
>  inline enum insn_code
> -get_vcond_eq_icode (machine_mode vmode, machine_mode cmode)
> +get_vcond_eq_icode (machine_mode, machine_mode)
>  {
> -  return convert_optab_handler (vcondeq_optab, vmode, cmode);
> +  return CODE_FOR_nothing;
>  }
>  
>  /* Enumerates the possible extraction_insn operations.  */
Richard Biener June 14, 2024, 11:07 a.m. UTC | #2
On Fri, 14 Jun 2024, Richard Sandiford wrote:

> Richard Biener <rguenther@suse.de> writes:
> > The following retires vcond{,u,eq} optabs by stopping to use them
> > from the middle-end.  Targets instead (should) implement vcond_mask
> > and vec_cmp{,u,eq} optabs.  The PR this change refers to lists
> > possibly affected targets - those implementing these patterns,
> > and in particular it lists mips, sparc and ia64 as targets that
> > most definitely will regress while others might simply remove
> > their vcond{,u,eq} patterns.
> >
> > I'd appreciate testing, I do not expect fallout for x86 or arm/aarch64.
> > I know riscv doesn't implement any of the legacy optabs.  But less
> > maintained vector targets might need adjustments.
> >
> > I want to get rid of those optabs for GCC 15.  If I don't hear from
> > you I will assume your target is fine.
> 
> Great!  Thanks for doing this.
> 
> Is there a plan for how we should handle vector comparisons that
> have to be done as the inverse of the negated condition?  Should
> targets simply not provide vec_cmp for such conditions and leave
> the target-independent code to deal with the fallout?  (For a
> standalone comparison, it would invert the result.  For a VEC_COND_EXPR
> it would swap the true and false values.)

I would expect that the ISEL pass, which currently deals with finding
valid combos of .VCMP{,U,EQ} and .VCOND_MASK, handles this.
So how do we deal with this right now?  I expect RTL expansion will
do the inverse trick, no?

Thanks,
Richard.

> Richard
> 
> >
> > Thanks,
> > Richard.
> >
> > 	PR middle-end/114189
> > 	* optabs-query.h (get_vcond_icode): Always return CODE_FOR_nothing.
> > 	(get_vcond_eq_icode): Likewise.
> > ---
> >  gcc/optabs-query.h | 13 ++++---------
> >  1 file changed, 4 insertions(+), 9 deletions(-)
> >
> > diff --git a/gcc/optabs-query.h b/gcc/optabs-query.h
> > index 0cb2c21ba85..31fbce80175 100644
> > --- a/gcc/optabs-query.h
> > +++ b/gcc/optabs-query.h
> > @@ -112,14 +112,9 @@ get_vec_cmp_eq_icode (machine_mode vmode, machine_mode mask_mode)
> >     mode CMODE, unsigned if UNS is true, resulting in a value of mode VMODE.  */
> >  
> >  inline enum insn_code
> > -get_vcond_icode (machine_mode vmode, machine_mode cmode, bool uns)
> > +get_vcond_icode (machine_mode, machine_mode, bool)
> >  {
> > -  enum insn_code icode = CODE_FOR_nothing;
> > -  if (uns)
> > -    icode = convert_optab_handler (vcondu_optab, vmode, cmode);
> > -  else
> > -    icode = convert_optab_handler (vcond_optab, vmode, cmode);
> > -  return icode;
> > +  return CODE_FOR_nothing;
> >  }
> >  
> >  /* Return insn code for a conditional operator with a mask mode
> > @@ -135,9 +130,9 @@ get_vcond_mask_icode (machine_mode vmode, machine_mode mmode)
> >     mode CMODE (only EQ/NE), resulting in a value of mode VMODE.  */
> >  
> >  inline enum insn_code
> > -get_vcond_eq_icode (machine_mode vmode, machine_mode cmode)
> > +get_vcond_eq_icode (machine_mode, machine_mode)
> >  {
> > -  return convert_optab_handler (vcondeq_optab, vmode, cmode);
> > +  return CODE_FOR_nothing;
> >  }
> >  
> >  /* Enumerates the possible extraction_insn operations.  */
>
Richard Sandiford June 14, 2024, 11:11 a.m. UTC | #3
Richard Biener <rguenther@suse.de> writes:
> On Fri, 14 Jun 2024, Richard Sandiford wrote:
>
>> Richard Biener <rguenther@suse.de> writes:
>> > The following retires vcond{,u,eq} optabs by stopping to use them
>> > from the middle-end.  Targets instead (should) implement vcond_mask
>> > and vec_cmp{,u,eq} optabs.  The PR this change refers to lists
>> > possibly affected targets - those implementing these patterns,
>> > and in particular it lists mips, sparc and ia64 as targets that
>> > most definitely will regress while others might simply remove
>> > their vcond{,u,eq} patterns.
>> >
>> > I'd appreciate testing, I do not expect fallout for x86 or arm/aarch64.
>> > I know riscv doesn't implement any of the legacy optabs.  But less
>> > maintained vector targets might need adjustments.
>> >
>> > I want to get rid of those optabs for GCC 15.  If I don't hear from
>> > you I will assume your target is fine.
>> 
>> Great!  Thanks for doing this.
>> 
>> Is there a plan for how we should handle vector comparisons that
>> have to be done as the inverse of the negated condition?  Should
>> targets simply not provide vec_cmp for such conditions and leave
>> the target-independent code to deal with the fallout?  (For a
>> standalone comparison, it would invert the result.  For a VEC_COND_EXPR
>> it would swap the true and false values.)
>
> I would expect that the ISEL pass which currently deals with finding
> valid combos of .VCMP{,U,EQ} and .VCOND_MASK deals with this.
> So how do we deal with this right now?  I expect RTL expansion will
> do the inverse trick, no?

I think in practice (at least for the targets I've worked on),
the target's vec_cmp handles the inversion itself.  Thus the
main optimisation done by targets' vcond patterns is to avoid
the inversion (and instead swap the true/false values) when the
"opposite" comparison is the native one.

Thanks,
Richard
Richard Biener June 14, 2024, 12:06 p.m. UTC | #4
On Fri, 14 Jun 2024, Richard Sandiford wrote:

> Richard Biener <rguenther@suse.de> writes:
> > On Fri, 14 Jun 2024, Richard Sandiford wrote:
> >
> >> Richard Biener <rguenther@suse.de> writes:
> >> > The following retires vcond{,u,eq} optabs by stopping to use them
> >> > from the middle-end.  Targets instead (should) implement vcond_mask
> >> > and vec_cmp{,u,eq} optabs.  The PR this change refers to lists
> >> > possibly affected targets - those implementing these patterns,
> >> > and in particular it lists mips, sparc and ia64 as targets that
> >> > most definitely will regress while others might simply remove
> >> > their vcond{,u,eq} patterns.
> >> >
> >> > I'd appreciate testing, I do not expect fallout for x86 or arm/aarch64.
> >> > I know riscv doesn't implement any of the legacy optabs.  But less
> >> > maintained vector targets might need adjustments.
> >> >
> >> > I want to get rid of those optabs for GCC 15.  If I don't hear from
> >> > you I will assume your target is fine.
> >> 
> >> Great!  Thanks for doing this.
> >> 
> >> Is there a plan for how we should handle vector comparisons that
> >> have to be done as the inverse of the negated condition?  Should
> >> targets simply not provide vec_cmp for such conditions and leave
> >> the target-independent code to deal with the fallout?  (For a
> >> standalone comparison, it would invert the result.  For a VEC_COND_EXPR
> >> it would swap the true and false values.)
> >
> > I would expect that the ISEL pass which currently deals with finding
> > valid combos of .VCMP{,U,EQ} and .VCOND_MASK deals with this.
> > So how do we deal with this right now?  I expect RTL expansion will
> > do the inverse trick, no?
> 
> I think in practice (at least for the targets I've worked on),
> the target's vec_cmp handles the inversion itself.  Thus the
> main optimisation done by targets' vcond patterns is to avoid
> the inversion (and instead swap the true/false values) when the
> "opposite" comparison is the native one.

I see.  I suppose whether or not vec_cmp is handled is determined
by a FAIL so it's somewhat difficult to determine this at ISEL time.

Meanwhile I've opened PR115490 with the x86 fallout from applying the
patch.

Richard.
Richard Biener June 14, 2024, 12:10 p.m. UTC | #5
On Fri, 14 Jun 2024, Richard Biener wrote:

> On Fri, 14 Jun 2024, Richard Sandiford wrote:
> 
> > Richard Biener <rguenther@suse.de> writes:
> > > On Fri, 14 Jun 2024, Richard Sandiford wrote:
> > >
> > >> Richard Biener <rguenther@suse.de> writes:
> > >> > The following retires vcond{,u,eq} optabs by stopping to use them
> > >> > from the middle-end.  Targets instead (should) implement vcond_mask
> > >> > and vec_cmp{,u,eq} optabs.  The PR this change refers to lists
> > >> > possibly affected targets - those implementing these patterns,
> > >> > and in particular it lists mips, sparc and ia64 as targets that
> > >> > most definitely will regress while others might simply remove
> > >> > their vcond{,u,eq} patterns.
> > >> >
> > >> > I'd appreciate testing, I do not expect fallout for x86 or arm/aarch64.
> > >> > I know riscv doesn't implement any of the legacy optabs.  But less
> > >> > maintained vector targets might need adjustments.
> > >> >
> > >> > I want to get rid of those optabs for GCC 15.  If I don't hear from
> > >> > you I will assume your target is fine.
> > >> 
> > >> Great!  Thanks for doing this.
> > >> 
> > >> Is there a plan for how we should handle vector comparisons that
> > >> have to be done as the inverse of the negated condition?  Should
> > >> targets simply not provide vec_cmp for such conditions and leave
> > >> the target-independent code to deal with the fallout?  (For a
> > >> standalone comparison, it would invert the result.  For a VEC_COND_EXPR
> > >> it would swap the true and false values.)
> > >
> > > I would expect that the ISEL pass which currently deals with finding
> > > valid combos of .VCMP{,U,EQ} and .VCOND_MASK deals with this.
> > > So how do we deal with this right now?  I expect RTL expansion will
> > > do the inverse trick, no?
> > 
> > I think in practice (at least for the targets I've worked on),
> > the target's vec_cmp handles the inversion itself.  Thus the
> > main optimisation done by targets' vcond patterns is to avoid
> > the inversion (and instead swap the true/false values) when the
> > "opposite" comparison is the native one.
> 
> I see.  I suppose whether or not vec_cmp is handled is determined
> by a FAIL so it's somewhat difficult to determine this at ISEL time.

I'll also note that we document vec_cmp{,u,eq} as producing all-zeros /
all-ones results, while vcond_mask might only care about the MSB
(it's documented to work on the result of a pre-computed vector
comparison).

So this eventually asks targets to work out the optimal sequence
via combine helpers, and thus eventually splitters, to fix up invalid
compare operators late?

Richard.
Hongtao Liu June 14, 2024, 2:53 p.m. UTC | #6
On Fri, Jun 14, 2024 at 6:31 PM Richard Biener <rguenther@suse.de> wrote:
>
> The following retires vcond{,u,eq} optabs by stopping to use them
> from the middle-end.  Targets instead (should) implement vcond_mask
> and vec_cmp{,u,eq} optabs.  The PR this change refers to lists
> possibly affected targets - those implementing these patterns,
> and in particular it lists mips, sparc and ia64 as targets that
> most definitely will regress while others might simply remove
> their vcond{,u,eq} patterns.
>
> I'd appreciate testing, I do not expect fallout for x86 or arm/aarch64.
> I know riscv doesn't implement any of the legacy optabs.  But less
> maintained vector targets might need adjustments.
>
In GCC 14 I tried to remove these expanders in the x86 backend, and it
regressed some testcases, mainly because of the optimizations we did
in ix86_expand_{int,fp}_vcond.
I've started testing your patch; it's possible that we still need to
move the ix86_expand_{int,fp}_vcond optimizations to the middle-end
(ISEL or match.pd) or add extra patterns to handle them in the RTL
combine pass.
Xi Ruoyao June 14, 2024, 6:33 p.m. UTC | #7
On Fri, 2024-06-14 at 12:31 +0200, Richard Biener wrote:
> The following retires vcond{,u,eq} optabs by stopping to use them
> from the middle-end.  Targets instead (should) implement vcond_mask
> and vec_cmp{,u,eq} optabs.  The PR this change refers to lists
> possibly affected targets - those implementing these patterns,
> and in particular it lists mips, sparc and ia64 as targets that
> most definitely will regress while others might simply remove
> their vcond{,u,eq} patterns.
> 
> I'd appreciate testing, I do not expect fallout for x86 or arm/aarch64.
> I know riscv doesn't implement any of the legacy optabs.  But less
> maintained vector targets might need adjustments.

No new test failures on LoongArch.
Hongtao Liu June 17, 2024, 2:59 a.m. UTC | #8
On Fri, Jun 14, 2024 at 10:53 PM Hongtao Liu <crazylht@gmail.com> wrote:
>
> On Fri, Jun 14, 2024 at 6:31 PM Richard Biener <rguenther@suse.de> wrote:
> >
> > The following retires vcond{,u,eq} optabs by stopping to use them
> > from the middle-end.  Targets instead (should) implement vcond_mask
> > and vec_cmp{,u,eq} optabs.  The PR this change refers to lists
> > possibly affected targets - those implementing these patterns,
> > and in particular it lists mips, sparc and ia64 as targets that
> > most definitely will regress while others might simply remove
> > their vcond{,u,eq} patterns.
> >
> > I'd appreciate testing, I do not expect fallout for x86 or arm/aarch64.
> > I know riscv doesn't implement any of the legacy optabs.  But less
> > maintained vector targets might need adjustments.
> >
> At GCC14, I tried to remove these expanders in the x86 backend, and it
> regressed some testcases, mainly because of the optimizations we did
> in ix86_expand_{int,fp}_vcond.
> I've started testing your patch, it's possible that we still need to
> move the ix86_expand_{int,fp}_vcond optimizations to the
> middle-end(isel or match.pd)or add extra patterns to handle it at the
> rtl pas_combine.
These are new failures I got

g++: g++.target/i386/avx-pr54700-1.C   scan-assembler-not vpcmpgt[bdq]
g++: g++.target/i386/avx-pr54700-1.C   scan-assembler-times vblendvpd 4
g++: g++.target/i386/avx-pr54700-1.C   scan-assembler-times vblendvps 4
g++: g++.target/i386/avx-pr54700-1.C   scan-assembler-times vpblendvb 2
g++: g++.target/i386/avx2-pr54700-1.C   scan-assembler-not vpcmpgt[bdq]
g++: g++.target/i386/avx2-pr54700-1.C   scan-assembler-times vblendvpd 4
g++: g++.target/i386/avx2-pr54700-1.C   scan-assembler-times vblendvps 4
g++: g++.target/i386/avx2-pr54700-1.C   scan-assembler-times vpblendvb 2
g++: g++.target/i386/avx512fp16-vcondmn-minmax.C  -std=gnu++14  scan-assembler-times vmaxph 3
g++: g++.target/i386/avx512fp16-vcondmn-minmax.C  -std=gnu++14  scan-assembler-times vminph 3
g++: g++.target/i386/avx512fp16-vcondmn-minmax.C  -std=gnu++17  scan-assembler-times vmaxph 3
g++: g++.target/i386/avx512fp16-vcondmn-minmax.C  -std=gnu++17  scan-assembler-times vminph 3
g++: g++.target/i386/avx512fp16-vcondmn-minmax.C  -std=gnu++20  scan-assembler-times vmaxph 3
g++: g++.target/i386/avx512fp16-vcondmn-minmax.C  -std=gnu++20  scan-assembler-times vminph 3
g++: g++.target/i386/avx512fp16-vcondmn-minmax.C  -std=gnu++98  scan-assembler-times vmaxph 3
g++: g++.target/i386/avx512fp16-vcondmn-minmax.C  -std=gnu++98  scan-assembler-times vminph 3
g++: g++.target/i386/pr100637-1b.C  -std=gnu++14  scan-assembler-times pcmpeqb 2
g++: g++.target/i386/pr100637-1b.C  -std=gnu++17  scan-assembler-times pcmpeqb 2
g++: g++.target/i386/pr100637-1b.C  -std=gnu++20  scan-assembler-times pcmpeqb 2
g++: g++.target/i386/pr100637-1b.C  -std=gnu++98  scan-assembler-times pcmpeqb 2
g++: g++.target/i386/pr100637-1w.C  -std=gnu++14  scan-assembler-times pcmpeqw 2
g++: g++.target/i386/pr100637-1w.C  -std=gnu++17  scan-assembler-times pcmpeqw 2
g++: g++.target/i386/pr100637-1w.C  -std=gnu++20  scan-assembler-times pcmpeqw 2
g++: g++.target/i386/pr100637-1w.C  -std=gnu++98  scan-assembler-times pcmpeqw 2
g++: g++.target/i386/pr100738-1.C  -std=gnu++14  scan-assembler-not vpcmpeqd[ \\t]
g++: g++.target/i386/pr100738-1.C  -std=gnu++14  scan-assembler-not vpxor[ \\t]
g++: g++.target/i386/pr100738-1.C  -std=gnu++14  scan-assembler-times vblendvps[ \\t] 2
g++: g++.target/i386/pr100738-1.C  -std=gnu++17  scan-assembler-not vpcmpeqd[ \\t]
g++: g++.target/i386/pr100738-1.C  -std=gnu++17  scan-assembler-not vpxor[ \\t]
g++: g++.target/i386/pr100738-1.C  -std=gnu++17  scan-assembler-times vblendvps[ \\t] 2
g++: g++.target/i386/pr100738-1.C  -std=gnu++20  scan-assembler-not vpcmpeqd[ \\t]
g++: g++.target/i386/pr100738-1.C  -std=gnu++20  scan-assembler-not vpxor[ \\t]
g++: g++.target/i386/pr100738-1.C  -std=gnu++20  scan-assembler-times vblendvps[ \\t] 2
g++: g++.target/i386/pr100738-1.C  -std=gnu++98  scan-assembler-not vpcmpeqd[ \\t]
g++: g++.target/i386/pr100738-1.C  -std=gnu++98  scan-assembler-not vpxor[ \\t]
g++: g++.target/i386/pr100738-1.C  -std=gnu++98  scan-assembler-times vblendvps[ \\t] 2
g++: g++.target/i386/pr103861-1.C  -std=gnu++14  scan-assembler-times pcmpeqb 2
g++: g++.target/i386/pr103861-1.C  -std=gnu++17  scan-assembler-times pcmpeqb 2
g++: g++.target/i386/pr103861-1.C  -std=gnu++20  scan-assembler-times pcmpeqb 2
g++: g++.target/i386/pr103861-1.C  -std=gnu++98  scan-assembler-times pcmpeqb 2
g++: g++.target/i386/pr61747.C  -std=gnu++14  scan-assembler-times max 4
g++: g++.target/i386/pr61747.C  -std=gnu++14  scan-assembler-times min 4
g++: g++.target/i386/pr61747.C  -std=gnu++17  scan-assembler-times max 4
g++: g++.target/i386/pr61747.C  -std=gnu++17  scan-assembler-times min 4
g++: g++.target/i386/pr61747.C  -std=gnu++20  scan-assembler-times max 4
g++: g++.target/i386/pr61747.C  -std=gnu++20  scan-assembler-times min 4
g++: g++.target/i386/sse4_1-pr54700-1.C   scan-assembler-not pcmpgt[bdq]
g++: g++.target/i386/sse4_1-pr54700-1.C   scan-assembler-times blendvpd 4
g++: g++.target/i386/sse4_1-pr54700-1.C   scan-assembler-times blendvps 4
g++: g++.target/i386/sse4_1-pr54700-1.C   scan-assembler-times pblendvb 2
gcc: gcc.target/i386/avx2-pr99908.c scan-assembler-not \tvpcmpeq
gcc: gcc.target/i386/avx512bw-pr96891-1.c scan-assembler-not %k[0-7]
gcc: gcc.target/i386/avx512vl-pr88547-1.c scan-assembler-not %k[0-9]
gcc: gcc.target/i386/avx512vl-pr88547-1.c scan-assembler-times vpminsb[\t ] 2
gcc: gcc.target/i386/avx512vl-pr88547-1.c scan-assembler-times vpminsd[\t ] 2
gcc: gcc.target/i386/avx512vl-pr88547-1.c scan-assembler-times vpminsq[\t ] 2
gcc: gcc.target/i386/avx512vl-pr88547-1.c scan-assembler-times vpminsw[\t ] 2
gcc: gcc.target/i386/avx512vl-pr88547-1.c scan-assembler-times vpminub[\t ] 2
gcc: gcc.target/i386/avx512vl-pr88547-1.c scan-assembler-times vpminud[\t ] 2
gcc: gcc.target/i386/avx512vl-pr88547-1.c scan-assembler-times vpminuq[\t ] 2
gcc: gcc.target/i386/avx512vl-pr88547-1.c scan-assembler-times vpminuw[\t ] 2
gcc: gcc.target/i386/blendv-3.c scan-assembler-not vpcmp
gcc: gcc.target/i386/pr88540.c scan-assembler minpd
gcc: gcc.target/i386/sse4_1-pr99908.c scan-assembler-not \tpcmpeq

unix/-m32: g++: g++.target/i386/avx-pr54700-1.C   scan-assembler-not vpcmpgt[bdq]
unix/-m32: g++: g++.target/i386/avx-pr54700-1.C   scan-assembler-times vblendvpd 4
unix/-m32: g++: g++.target/i386/avx-pr54700-1.C   scan-assembler-times vblendvps 4
unix/-m32: g++: g++.target/i386/avx-pr54700-1.C   scan-assembler-times vpblendvb 2
unix/-m32: g++: g++.target/i386/avx2-pr54700-1.C   scan-assembler-not vpcmpgt[bdq]
unix/-m32: g++: g++.target/i386/avx2-pr54700-1.C   scan-assembler-times vblendvpd 4
unix/-m32: g++: g++.target/i386/avx2-pr54700-1.C   scan-assembler-times vblendvps 4
unix/-m32: g++: g++.target/i386/avx2-pr54700-1.C   scan-assembler-times vpblendvb 2
unix/-m32: g++: g++.target/i386/avx512fp16-vcondmn-minmax.C  -std=gnu++14  scan-assembler-times vmaxph 3
unix/-m32: g++: g++.target/i386/avx512fp16-vcondmn-minmax.C  -std=gnu++14  scan-assembler-times vminph 3
unix/-m32: g++: g++.target/i386/avx512fp16-vcondmn-minmax.C  -std=gnu++17  scan-assembler-times vmaxph 3
unix/-m32: g++: g++.target/i386/avx512fp16-vcondmn-minmax.C  -std=gnu++17  scan-assembler-times vminph 3
unix/-m32: g++: g++.target/i386/avx512fp16-vcondmn-minmax.C  -std=gnu++20  scan-assembler-times vmaxph 3
unix/-m32: g++: g++.target/i386/avx512fp16-vcondmn-minmax.C  -std=gnu++20  scan-assembler-times vminph 3
unix/-m32: g++: g++.target/i386/avx512fp16-vcondmn-minmax.C  -std=gnu++98  scan-assembler-times vmaxph 3
unix/-m32: g++: g++.target/i386/avx512fp16-vcondmn-minmax.C  -std=gnu++98  scan-assembler-times vminph 3
unix/-m32: g++: g++.target/i386/pr100637-1b.C  -std=gnu++14  scan-assembler-times pcmpeqb 2
unix/-m32: g++: g++.target/i386/pr100637-1b.C  -std=gnu++17  scan-assembler-times pcmpeqb 2
unix/-m32: g++: g++.target/i386/pr100637-1b.C  -std=gnu++20  scan-assembler-times pcmpeqb 2
unix/-m32: g++: g++.target/i386/pr100637-1b.C  -std=gnu++98  scan-assembler-times pcmpeqb 2
unix/-m32: g++: g++.target/i386/pr100637-1w.C  -std=gnu++14  scan-assembler-times pcmpeqw 2
unix/-m32: g++: g++.target/i386/pr100637-1w.C  -std=gnu++17  scan-assembler-times pcmpeqw 2
unix/-m32: g++: g++.target/i386/pr100637-1w.C  -std=gnu++20  scan-assembler-times pcmpeqw 2
unix/-m32: g++: g++.target/i386/pr100637-1w.C  -std=gnu++98  scan-assembler-times pcmpeqw 2
unix/-m32: g++: g++.target/i386/pr100738-1.C  -std=gnu++14  scan-assembler-not vpcmpeqd[ \\t]
unix/-m32: g++: g++.target/i386/pr100738-1.C  -std=gnu++14  scan-assembler-not vpxor[ \\t]
unix/-m32: g++: g++.target/i386/pr100738-1.C  -std=gnu++14  scan-assembler-times vblendvps[ \\t] 2
unix/-m32: g++: g++.target/i386/pr100738-1.C  -std=gnu++17  scan-assembler-not vpcmpeqd[ \\t]
unix/-m32: g++: g++.target/i386/pr100738-1.C  -std=gnu++17  scan-assembler-not vpxor[ \\t]
unix/-m32: g++: g++.target/i386/pr100738-1.C  -std=gnu++17  scan-assembler-times vblendvps[ \\t] 2
unix/-m32: g++: g++.target/i386/pr100738-1.C  -std=gnu++20  scan-assembler-not vpcmpeqd[ \\t]
unix/-m32: g++: g++.target/i386/pr100738-1.C  -std=gnu++20  scan-assembler-not vpxor[ \\t]
unix/-m32: g++: g++.target/i386/pr100738-1.C  -std=gnu++20  scan-assembler-times vblendvps[ \\t] 2
unix/-m32: g++: g++.target/i386/pr100738-1.C  -std=gnu++98  scan-assembler-not vpcmpeqd[ \\t]
unix/-m32: g++: g++.target/i386/pr100738-1.C  -std=gnu++98  scan-assembler-not vpxor[ \\t]
unix/-m32: g++: g++.target/i386/pr100738-1.C  -std=gnu++98  scan-assembler-times vblendvps[ \\t] 2
unix/-m32: g++: g++.target/i386/pr103861-1.C  -std=gnu++14  scan-assembler-times pcmpeqb 2
unix/-m32: g++: g++.target/i386/pr103861-1.C  -std=gnu++17  scan-assembler-times pcmpeqb 2
unix/-m32: g++: g++.target/i386/pr103861-1.C  -std=gnu++20  scan-assembler-times pcmpeqb 2
unix/-m32: g++: g++.target/i386/pr103861-1.C  -std=gnu++98  scan-assembler-times pcmpeqb 2
unix/-m32: g++: g++.target/i386/pr61747.C  -std=gnu++14  scan-assembler-times max 4
unix/-m32: g++: g++.target/i386/pr61747.C  -std=gnu++14  scan-assembler-times min 4
unix/-m32: g++: g++.target/i386/pr61747.C  -std=gnu++17  scan-assembler-times max 4
unix/-m32: g++: g++.target/i386/pr61747.C  -std=gnu++17  scan-assembler-times min 4
unix/-m32: g++: g++.target/i386/pr61747.C  -std=gnu++20  scan-assembler-times max 4
unix/-m32: g++: g++.target/i386/pr61747.C  -std=gnu++20  scan-assembler-times min 4
unix/-m32: g++: g++.target/i386/sse4_1-pr54700-1.C   scan-assembler-not pcmpgt[bdq]
unix/-m32: g++: g++.target/i386/sse4_1-pr54700-1.C   scan-assembler-times blendvpd 4
unix/-m32: g++: g++.target/i386/sse4_1-pr54700-1.C   scan-assembler-times blendvps 4
unix/-m32: g++: g++.target/i386/sse4_1-pr54700-1.C   scan-assembler-times pblendvb 2
unix/-m32: gcc: gcc.target/i386/avx2-pr99908.c scan-assembler-not \tvpcmpeq
unix/-m32: gcc: gcc.target/i386/avx512bw-pr96891-1.c scan-assembler-not %k[0-7]
unix/-m32: gcc: gcc.target/i386/avx512vl-pr88547-1.c scan-assembler-not %k[0-9]
unix/-m32: gcc: gcc.target/i386/avx512vl-pr88547-1.c scan-assembler-times vpminsb[\t ] 2
unix/-m32: gcc: gcc.target/i386/avx512vl-pr88547-1.c scan-assembler-times vpminsd[\t ] 2
unix/-m32: gcc: gcc.target/i386/avx512vl-pr88547-1.c scan-assembler-times vpminsq[\t ] 2
unix/-m32: gcc: gcc.target/i386/avx512vl-pr88547-1.c scan-assembler-times vpminsw[\t ] 2
unix/-m32: gcc: gcc.target/i386/avx512vl-pr88547-1.c scan-assembler-times vpminub[\t ] 2
unix/-m32: gcc: gcc.target/i386/avx512vl-pr88547-1.c scan-assembler-times vpminud[\t ] 2
unix/-m32: gcc: gcc.target/i386/avx512vl-pr88547-1.c scan-assembler-times vpminuq[\t ] 2
unix/-m32: gcc: gcc.target/i386/avx512vl-pr88547-1.c scan-assembler-times vpminuw[\t ] 2
unix/-m32: gcc: gcc.target/i386/blendv-3.c scan-assembler-not vpcmp
unix/-m32: gcc: gcc.target/i386/pr88540.c scan-assembler minpd
unix/-m32: gcc: gcc.target/i386/sse4_1-pr99908.c scan-assembler-not \tpcmpeq

>
> --
> BR,
> Hongtao
Kewen.Lin June 17, 2024, 6 a.m. UTC | #9
Hi Richi,

on 2024/6/14 18:31, Richard Biener wrote:
> The following retires vcond{,u,eq} optabs by stopping to use them
> from the middle-end.  Targets instead (should) implement vcond_mask
> and vec_cmp{,u,eq} optabs.  The PR this change refers to lists
> possibly affected targets - those implementing these patterns,
> and in particular it lists mips, sparc and ia64 as targets that
> most definitely will regress while others might simply remove
> their vcond{,u,eq} patterns.
> 
> I'd appreciate testing, I do not expect fallout for x86 or arm/aarch64.
> I know riscv doesn't implement any of the legacy optabs.  But less
> maintained vector targets might need adjustments.

Thanks for making this change.  This patch can be bootstrapped on ppc64{,le},
but both have one failure on gcc/testsuite/gcc.target/powerpc/pr66144-3.c.
Looking into it, I found it just exposed one oversight in the current
rs6000 vcond_mask support (the condition mask location is wrong), so I think
this change is fine for the rs6000 port.  I'll also test SPEC2017 for this
(with the rs6000 vcond_mask change) soon.

BR,
Kewen

> 
> I want to get rid of those optabs for GCC 15.  If I don't hear from
> you I will assume your target is fine.
> 
> Thanks,
> Richard.
> 
> 	PR middle-end/114189
> 	* optabs-query.h (get_vcond_icode): Always return CODE_FOR_nothing.
> 	(get_vcond_eq_icode): Likewise.
> ---
>  gcc/optabs-query.h | 13 ++++---------
>  1 file changed, 4 insertions(+), 9 deletions(-)
> 
> diff --git a/gcc/optabs-query.h b/gcc/optabs-query.h
> index 0cb2c21ba85..31fbce80175 100644
> --- a/gcc/optabs-query.h
> +++ b/gcc/optabs-query.h
> @@ -112,14 +112,9 @@ get_vec_cmp_eq_icode (machine_mode vmode, machine_mode mask_mode)
>     mode CMODE, unsigned if UNS is true, resulting in a value of mode VMODE.  */
>  
>  inline enum insn_code
> -get_vcond_icode (machine_mode vmode, machine_mode cmode, bool uns)
> +get_vcond_icode (machine_mode, machine_mode, bool)
>  {
> -  enum insn_code icode = CODE_FOR_nothing;
> -  if (uns)
> -    icode = convert_optab_handler (vcondu_optab, vmode, cmode);
> -  else
> -    icode = convert_optab_handler (vcond_optab, vmode, cmode);
> -  return icode;
> +  return CODE_FOR_nothing;
>  }
>  
>  /* Return insn code for a conditional operator with a mask mode
> @@ -135,9 +130,9 @@ get_vcond_mask_icode (machine_mode vmode, machine_mode mmode)
>     mode CMODE (only EQ/NE), resulting in a value of mode VMODE.  */
>  
>  inline enum insn_code
> -get_vcond_eq_icode (machine_mode vmode, machine_mode cmode)
> +get_vcond_eq_icode (machine_mode, machine_mode)
>  {
> -  return convert_optab_handler (vcondeq_optab, vmode, cmode);
> +  return CODE_FOR_nothing;
>  }
>  
>  /* Enumerates the possible extraction_insn operations.  */
Richard Biener June 17, 2024, 6:16 a.m. UTC | #10
On Mon, 17 Jun 2024, Kewen.Lin wrote:

> Hi Richi,
> 
> on 2024/6/14 18:31, Richard Biener wrote:
> > The following retires vcond{,u,eq} optabs by stopping to use them
> > from the middle-end.  Targets instead (should) implement vcond_mask
> > and vec_cmp{,u,eq} optabs.  The PR this change refers to lists
> > possibly affected targets - those implementing these patterns,
> > and in particular it lists mips, sparc and ia64 as targets that
> > most definitely will regress while others might simply remove
> > their vcond{,u,eq} patterns.
> > 
> > I'd appreciate testing, I do not expect fallout for x86 or arm/aarch64.
> > I know riscv doesn't implement any of the legacy optabs.  But less
> > maintained vector targets might need adjustments.
> 
> Thanks for making this change, this patch can be bootstrapped on ppc64{,le}
> but both have one failure on gcc/testsuite/gcc.target/powerpc/pr66144-3.c,
> by looking into it, I found it just exposed one oversight in the current
> rs6000 vcond_mask support (the condition mask location is wrong), so I think
> this change is fine for rs6000 port, I'll also test SPEC2017 for this (with
> rs6000 vcond_mask change) soon.

Btw, for those targets where the patch works out fine it would be nice
to delete their vcond{,u,eq} expanders (and double-check that doesn't
cause issues on its own).

Can target maintainers note whether their targets support all condition
codes for their vector comparisons (including FP variants)?  And
whether they choose to implement all condition codes in vec_cmp
and adjust with inversion / operand swapping for the unsupported cases?

Thanks,
Richard.

> BR,
> Kewen
> 
> > 
> > I want to get rid of those optabs for GCC 15.  If I don't hear from
> > you I will assume your target is fine.
> > 
> > Thanks,
> > Richard.
> > 
> > 	PR middle-end/114189
> > 	* optabs-query.h (get_vcond_icode): Always return CODE_FOR_nothing.
> > 	(get_vcond_eq_icode): Likewise.
> > ---
> >  gcc/optabs-query.h | 13 ++++---------
> >  1 file changed, 4 insertions(+), 9 deletions(-)
> > 
> > diff --git a/gcc/optabs-query.h b/gcc/optabs-query.h
> > index 0cb2c21ba85..31fbce80175 100644
> > --- a/gcc/optabs-query.h
> > +++ b/gcc/optabs-query.h
> > @@ -112,14 +112,9 @@ get_vec_cmp_eq_icode (machine_mode vmode, machine_mode mask_mode)
> >     mode CMODE, unsigned if UNS is true, resulting in a value of mode VMODE.  */
> >  
> >  inline enum insn_code
> > -get_vcond_icode (machine_mode vmode, machine_mode cmode, bool uns)
> > +get_vcond_icode (machine_mode, machine_mode, bool)
> >  {
> > -  enum insn_code icode = CODE_FOR_nothing;
> > -  if (uns)
> > -    icode = convert_optab_handler (vcondu_optab, vmode, cmode);
> > -  else
> > -    icode = convert_optab_handler (vcond_optab, vmode, cmode);
> > -  return icode;
> > +  return CODE_FOR_nothing;
> >  }
> >  
> >  /* Return insn code for a conditional operator with a mask mode
> > @@ -135,9 +130,9 @@ get_vcond_mask_icode (machine_mode vmode, machine_mode mmode)
> >     mode CMODE (only EQ/NE), resulting in a value of mode VMODE.  */
> >  
> >  inline enum insn_code
> > -get_vcond_eq_icode (machine_mode vmode, machine_mode cmode)
> > +get_vcond_eq_icode (machine_mode, machine_mode)
> >  {
> > -  return convert_optab_handler (vcondeq_optab, vmode, cmode);
> > +  return CODE_FOR_nothing;
> >  }
> >  
> >  /* Enumerates the possible extraction_insn operations.  */
> 
>
Kewen.Lin June 17, 2024, 7:27 a.m. UTC | #11
on 2024/6/17 14:16, Richard Biener wrote:
> On Mon, 17 Jun 2024, Kewen.Lin wrote:
> 
>> Hi Richi,
>>
>> on 2024/6/14 18:31, Richard Biener wrote:
>>> The following retires vcond{,u,eq} optabs by stopping to use them
>>> from the middle-end.  Targets instead (should) implement vcond_mask
>>> and vec_cmp{,u,eq} optabs.  The PR this change refers to lists
>>> possibly affected targets - those implementing these patterns,
>>> and in particular it lists mips, sparc and ia64 as targets that
>>> most definitely will regress while others might simply remove
>>> their vcond{,u,eq} patterns.
>>>
>>> I'd appreciate testing, I do not expect fallout for x86 or arm/aarch64.
>>> I know riscv doesn't implement any of the legacy optabs.  But less
>>> maintained vector targets might need adjustments.
>>
>> Thanks for making this change, this patch can be bootstrapped on ppc64{,le}
>> but both have one failure on gcc/testsuite/gcc.target/powerpc/pr66144-3.c,
>> by looking into it, I found it just exposed one oversight in the current
>> rs6000 vcond_mask support (the condition mask location is wrong), so I think
>> this change is fine for rs6000 port, I'll also test SPEC2017 for this (with
>> rs6000 vcond_mask change) soon.
> 
> Btw, for those targets where the patch works out fine it would be nice
> to delete their vcond{,u,eq} expanders (and double-check that doesn't
> cause issues on its own).

OK, will do, thanks for reminding!

> 
> Can target maintainers note whether their targets support all condition
> codes for their vector comparisons (including FP variants)?  And 

On Power, the hardware only supports EQ and GT for vector INT (well, ISA 3.0
supports NE for b/h/w), and EQ, GT & GE for vector FP.  But the vec_cmp
optab supports
{EQ,NE,LT,LE,GT,GE} for signed, {EQ,NE,LTU,LEU,GTU,GEU} for unsigned, and
{EQ,NE,LT,LE,GT,GE,UNORDERED,ORDERED,UNEQ,LTGT,UNGE,UNGT,UNLT,UNLE} for fp.

> whether they choose to implement all condition codes in vec_cmp
> and adjust with inversion / operand swapping for not supported cases?

Yes for the rs6000 port; some of them rely on define_insn_and_split.

BR,
Kewen
Richard Sandiford June 17, 2024, 9:13 a.m. UTC | #12
Richard Biener <rguenther@suse.de> writes:
> On Fri, 14 Jun 2024, Richard Biener wrote:
>
>> On Fri, 14 Jun 2024, Richard Sandiford wrote:
>> 
>> > Richard Biener <rguenther@suse.de> writes:
>> > > On Fri, 14 Jun 2024, Richard Sandiford wrote:
>> > >
>> > >> Richard Biener <rguenther@suse.de> writes:
>> > >> > The following retires vcond{,u,eq} optabs by stopping to use them
>> > >> > from the middle-end.  Targets instead (should) implement vcond_mask
>> > >> > and vec_cmp{,u,eq} optabs.  The PR this change refers to lists
>> > >> > possibly affected targets - those implementing these patterns,
>> > >> > and in particular it lists mips, sparc and ia64 as targets that
>> > >> > most definitely will regress while others might simply remove
>> > >> > their vcond{,u,eq} patterns.
>> > >> >
>> > >> > I'd appreciate testing, I do not expect fallout for x86 or arm/aarch64.
>> > >> > I know riscv doesn't implement any of the legacy optabs.  But less
>> > >> > maintained vector targets might need adjustments.
>> > >> >
>> > >> > I want to get rid of those optabs for GCC 15.  If I don't hear from
>> > >> > you I will assume your target is fine.
>> > >> 
>> > >> Great!  Thanks for doing this.
>> > >> 
>> > >> Is there a plan for how we should handle vector comparisons that
>> > >> have to be done as the inverse of the negated condition?  Should
>> > >> targets simply not provide vec_cmp for such conditions and leave
>> > >> the target-independent code to deal with the fallout?  (For a
>> > >> standalone comparison, it would invert the result.  For a VEC_COND_EXPR
>> > >> it would swap the true and false values.)
>> > >
>> > > I would expect that the ISEL pass which currently deals with finding
>> > > valid combos of .VCMP{,U,EQ} and .VCOND_MASK deals with this.
>> > > So how do we deal with this right now?  I expect RTL expansion will
>> > > do the inverse trick, no?
>> > 
>> > I think in practice (at least for the targets I've worked on),
>> > the target's vec_cmp handles the inversion itself.  Thus the
>> > main optimisation done by targets' vcond patterns is to avoid
>> > the inversion (and instead swap the true/false values) when the
>> > "opposite" comparison is the native one.
>> 
>> I see.  I suppose whether or not vec_cmp is handled is determined
>> by a FAIL so it's somewhat difficult to determine this at ISEL time.

In principle we could say that the predicates should accept only the
conditions that can be done natively.  Then target-independent code
can apply the usual approaches to generating other conditions
(which tend to be replicated across targets anyway).

> I'll also note that we document vec_cmp{,u,eq} as having all zeros,
> all ones for the result while vcond_mask might only care for the MSB
> (it's documented to work on the result of a pre-computed vector
> comparison).

Not sure how much the docs reflect reality.  At least for SVE,
vec_cmp returns 0/1 results for vector boolean modes. 

But I think for integer comparison results, vec_cmp must produce 0/-1
and vcond only accepts 0/-1.

> So this eventually asks for targets to work out the optimal sequence
> via combine helpers and thus eventually splitters to fixup invalid
> compare operators late?

I really hope we can do this in late gimple & expand.

Thanks,
Richard
Andrew Stubbs June 17, 2024, 10:16 a.m. UTC | #13
On 14/06/2024 11:31, Richard Biener wrote:
> The following retires vcond{,u,eq} optabs by stopping to use them
> from the middle-end.  Targets instead (should) implement vcond_mask
> and vec_cmp{,u,eq} optabs.  The PR this change refers to lists
> possibly affected targets - those implementing these patterns,
> and in particular it lists mips, sparc and ia64 as targets that
> most definitely will regress while others might simply remove
> their vcond{,u,eq} patterns.
> 
> I'd appreciate testing, I do not expect fallout for x86 or arm/aarch64.
> I know riscv doesn't implement any of the legacy optabs.  But less
> maintained vector targets might need adjustments.
> 
> I want to get rid of those optabs for GCC 15.  If I don't hear from
> you I will assume your target is fine.

Seems OK for GCN.

The GCN vcond patterns are expanded directly to vec_cmp/vcond_mask, so 
the set of supported operations should be identical.

Andrew
Stefan Schulze Frielinghaus June 17, 2024, 10:31 a.m. UTC | #14
On Mon, Jun 17, 2024 at 08:16:34AM +0200, Richard Biener wrote:
> On Mon, 17 Jun 2024, Kewen.Lin wrote:
> 
> > Hi Richi,
> > 
> > on 2024/6/14 18:31, Richard Biener wrote:
> > > The following retires vcond{,u,eq} optabs by stopping to use them
> > > from the middle-end.  Targets instead (should) implement vcond_mask
> > > and vec_cmp{,u,eq} optabs.  The PR this change refers to lists
> > > possibly affected targets - those implementing these patterns,
> > > and in particular it lists mips, sparc and ia64 as targets that
> > > most definitely will regress while others might simply remove
> > > their vcond{,u,eq} patterns.
> > > 
> > > I'd appreciate testing, I do not expect fallout for x86 or arm/aarch64.
> > > I know riscv doesn't implement any of the legacy optabs.  But less
> > > maintained vector targets might need adjustments.
> > 
> > Thanks for making this change, this patch can be bootstrapped on ppc64{,le}
> > but both have one failure on gcc/testsuite/gcc.target/powerpc/pr66144-3.c,
> > by looking into it, I found it just exposed one oversight in the current
> > rs6000 vcond_mask support (the condition mask location is wrong), so I think
> > this change is fine for rs6000 port, I'll also test SPEC2017 for this (with
> > rs6000 vcond_mask change) soon.
> 
> Btw, for those targets where the patch works out fine it would be nice
> to delete their vcond{,u,eq} expanders (and double-check that doesn't
> cause issues on its own).
> 
> Can target maintainers note whether their targets support all condition
> codes for their vector comparisons (including FP variants)?  And 
> whether they choose to implement all condition codes in vec_cmp
> and adjust with inversion / operand swapping for not supported cases?

On s390 we support all comparison operations, with inversion / operand
swapping, via s390_expand_vec_compare.  However, we still have some
failures, for which I opened PR115519.  Currently it is unclear to me
what precisely is missing, and I will have a further look.  The vcond_mask
expander is also implemented for all modes.

Cheers,
Stefan

> 
> Thanks,
> Richard.
> 
> > BR,
> > Kewen
> > 
> > > 
> > > I want to get rid of those optabs for GCC 15.  If I don't hear from
> > > you I will assume your target is fine.
> > > 
> > > Thanks,
> > > Richard.
> > > 
> > > 	PR middle-end/114189
> > > 	* optabs-query.h (get_vcond_icode): Always return CODE_FOR_nothing.
> > > 	(get_vcond_eq_icode): Likewise.
> > > ---
> > >  gcc/optabs-query.h | 13 ++++---------
> > >  1 file changed, 4 insertions(+), 9 deletions(-)
> > > 
> > > diff --git a/gcc/optabs-query.h b/gcc/optabs-query.h
> > > index 0cb2c21ba85..31fbce80175 100644
> > > --- a/gcc/optabs-query.h
> > > +++ b/gcc/optabs-query.h
> > > @@ -112,14 +112,9 @@ get_vec_cmp_eq_icode (machine_mode vmode, machine_mode mask_mode)
> > >     mode CMODE, unsigned if UNS is true, resulting in a value of mode VMODE.  */
> > >  
> > >  inline enum insn_code
> > > -get_vcond_icode (machine_mode vmode, machine_mode cmode, bool uns)
> > > +get_vcond_icode (machine_mode, machine_mode, bool)
> > >  {
> > > -  enum insn_code icode = CODE_FOR_nothing;
> > > -  if (uns)
> > > -    icode = convert_optab_handler (vcondu_optab, vmode, cmode);
> > > -  else
> > > -    icode = convert_optab_handler (vcond_optab, vmode, cmode);
> > > -  return icode;
> > > +  return CODE_FOR_nothing;
> > >  }
> > >  
> > >  /* Return insn code for a conditional operator with a mask mode
> > > @@ -135,9 +130,9 @@ get_vcond_mask_icode (machine_mode vmode, machine_mode mmode)
> > >     mode CMODE (only EQ/NE), resulting in a value of mode VMODE.  */
> > >  
> > >  inline enum insn_code
> > > -get_vcond_eq_icode (machine_mode vmode, machine_mode cmode)
> > > +get_vcond_eq_icode (machine_mode, machine_mode)
> > >  {
> > > -  return convert_optab_handler (vcondeq_optab, vmode, cmode);
> > > +  return CODE_FOR_nothing;
> > >  }
> > >  
> > >  /* Enumerates the possible extraction_insn operations.  */
> > 
> > 
> 
> -- 
> Richard Biener <rguenther@suse.de>
> SUSE Software Solutions Germany GmbH,
> Frankenstrasse 146, 90461 Nuernberg, Germany;
> GF: Ivo Totev, Andrew McDonald, Werner Knoblich; (HRB 36809, AG Nuernberg)
Richard Biener June 18, 2024, 6:17 a.m. UTC | #15
On Mon, 17 Jun 2024, Richard Sandiford wrote:

> Richard Biener <rguenther@suse.de> writes:
> > On Fri, 14 Jun 2024, Richard Biener wrote:
> >
> >> On Fri, 14 Jun 2024, Richard Sandiford wrote:
> >> 
> >> > Richard Biener <rguenther@suse.de> writes:
> >> > > On Fri, 14 Jun 2024, Richard Sandiford wrote:
> >> > >
> >> > >> Richard Biener <rguenther@suse.de> writes:
> >> > >> > The following retires vcond{,u,eq} optabs by stopping to use them
> >> > >> > from the middle-end.  Targets instead (should) implement vcond_mask
> >> > >> > and vec_cmp{,u,eq} optabs.  The PR this change refers to lists
> >> > >> > possibly affected targets - those implementing these patterns,
> >> > >> > and in particular it lists mips, sparc and ia64 as targets that
> >> > >> > most definitely will regress while others might simply remove
> >> > >> > their vcond{,u,eq} patterns.
> >> > >> >
> >> > >> > I'd appreciate testing, I do not expect fallout for x86 or arm/aarch64.
> >> > >> > I know riscv doesn't implement any of the legacy optabs.  But less
> >> > >> > maintained vector targets might need adjustments.
> >> > >> >
> >> > >> > I want to get rid of those optabs for GCC 15.  If I don't hear from
> >> > >> > you I will assume your target is fine.
> >> > >> 
> >> > >> Great!  Thanks for doing this.
> >> > >> 
> >> > >> Is there a plan for how we should handle vector comparisons that
> >> > >> have to be done as the inverse of the negated condition?  Should
> >> > >> targets simply not provide vec_cmp for such conditions and leave
> >> > >> the target-independent code to deal with the fallout?  (For a
> >> > >> standalone comparison, it would invert the result.  For a VEC_COND_EXPR
> >> > >> it would swap the true and false values.)
> >> > >
> >> > > I would expect that the ISEL pass which currently deals with finding
> >> > > valid combos of .VCMP{,U,EQ} and .VCOND_MASK deals with this.
> >> > > So how do we deal with this right now?  I expect RTL expansion will
> >> > > do the inverse trick, no?
> >> > 
> >> > I think in practice (at least for the targets I've worked on),
> >> > the target's vec_cmp handles the inversion itself.  Thus the
> >> > main optimisation done by targets' vcond patterns is to avoid
> >> > the inversion (and instead swap the true/false values) when the
> >> > "opposite" comparison is the native one.
> >> 
> >> I see.  I suppose whether or not vec_cmp is handled is determined
> >> by a FAIL so it's somewhat difficult to determine this at ISEL time.
> 
> In principle we could say that the predicates should accept only the
> conditions that can be done natively.  Then target-independent code
> can apply the usual approaches to generating other conditions
> (which tend to be replicated across targets anyway).

Ah yeah, I suppose that would work.  So we'd update the docs
to say predicates are required to reject unhandled compares,
and that otherwise the expander may not FAIL?

I'll note that expand_vec_cmp_expr_p already looks at the insn
predicates, so adjusting vector lowering (and vectorization) to
emit only recognized compares (and requiring folding to keep it at that)
should be possible.

ISEL would then mainly need to learn the trick of swapping vector
cond arms on inverted masks.  OTOH folding should also do that.

Or do you suggest to allow all compares on GIMPLE and only fixup
during ISEL?  How do we handle vector lowering then?  Would it be
enough to require "any" condition code and thus we expect targets
to implement enough codes so all compares can be handled by
swapping/inversion?

> > I'll also note that we document vec_cmp{,u,eq} as having all zeros,
> > all ones for the result while vcond_mask might only care for the MSB
> > (it's documented to work on the result of a pre-computed vector
> > comparison).
> 
> Not sure how much the docs reflect reality.  At least for SVE,
> vec_cmp returns 0/1 results for vector boolean modes. 

Likewise for AVX512, though since all elements are 1 bit it's -1 as well.

> But I think for integer comparison results, vec_cmp must produce 0/-1
> and vcond only accepts 0/-1.

OK, so we adjust docs to constrain vector integer results but otherwise
state the result is only used as predicate/mask operand.

> > So this eventually asks for targets to work out the optimal sequence
> > via combine helpers and thus eventually splitters to fixup invalid
> > compare operators late?
> 
> I really hope we can do this in late gimple & expand.

Me as well.

Richard.

> Thanks,
> Richard
>
Richard Sandiford June 20, 2024, 6:38 p.m. UTC | #16
Richard Biener <rguenther@suse.de> writes:
> On Mon, 17 Jun 2024, Richard Sandiford wrote:
>
>> Richard Biener <rguenther@suse.de> writes:
>> > On Fri, 14 Jun 2024, Richard Biener wrote:
>> >
>> >> On Fri, 14 Jun 2024, Richard Sandiford wrote:
>> >> 
>> >> > Richard Biener <rguenther@suse.de> writes:
>> >> > > On Fri, 14 Jun 2024, Richard Sandiford wrote:
>> >> > >
>> >> > >> Richard Biener <rguenther@suse.de> writes:
>> >> > >> > The following retires vcond{,u,eq} optabs by stopping to use them
>> >> > >> > from the middle-end.  Targets instead (should) implement vcond_mask
>> >> > >> > and vec_cmp{,u,eq} optabs.  The PR this change refers to lists
>> >> > >> > possibly affected targets - those implementing these patterns,
>> >> > >> > and in particular it lists mips, sparc and ia64 as targets that
>> >> > >> > most definitely will regress while others might simply remove
>> >> > >> > their vcond{,u,eq} patterns.
>> >> > >> >
>> >> > >> > I'd appreciate testing, I do not expect fallout for x86 or arm/aarch64.
>> >> > >> > I know riscv doesn't implement any of the legacy optabs.  But less
>> >> > >> > maintained vector targets might need adjustments.
>> >> > >> >
>> >> > >> > I want to get rid of those optabs for GCC 15.  If I don't hear from
>> >> > >> > you I will assume your target is fine.
>> >> > >> 
>> >> > >> Great!  Thanks for doing this.
>> >> > >> 
>> >> > >> Is there a plan for how we should handle vector comparisons that
>> >> > >> have to be done as the inverse of the negated condition?  Should
>> >> > >> targets simply not provide vec_cmp for such conditions and leave
>> >> > >> the target-independent code to deal with the fallout?  (For a
>> >> > >> standalone comparison, it would invert the result.  For a VEC_COND_EXPR
>> >> > >> it would swap the true and false values.)
>> >> > >
>> >> > > I would expect that the ISEL pass which currently deals with finding
>> >> > > valid combos of .VCMP{,U,EQ} and .VCOND_MASK deals with this.
>> >> > > So how do we deal with this right now?  I expect RTL expansion will
>> >> > > do the inverse trick, no?
>> >> > 
>> >> > I think in practice (at least for the targets I've worked on),
>> >> > the target's vec_cmp handles the inversion itself.  Thus the
>> >> > main optimisation done by targets' vcond patterns is to avoid
>> >> > the inversion (and instead swap the true/false values) when the
>> >> > "opposite" comparison is the native one.
>> >> 
>> >> I see.  I suppose whether or not vec_cmp is handled is determined
>> >> by a FAIL so it's somewhat difficult to determine this at ISEL time.
>> 
>> In principle we could say that the predicates should accept only the
>> conditions that can be done natively.  Then target-independent code
>> can apply the usual approaches to generating other conditions
>> (which tend to be replicated across targets anyway).
>
> Ah yeah, I suppose that would work.  So we'd update the docs
> to say predicates are required to reject not handled compares
> and otherwise the expander may not FAIL?
>
> I'll note that expand_vec_cmp_expr_p already looks at the insn
> predicates, so adjusting vector lowering (and vectorization) to
> emit only recognized compares (and requiring folding to keep it at that)
> should be possible.
>
> ISEL would then mainly need to learn the trick of swapping vector
> cond arms on inverted masks.  OTOH folding should also do that.

Yeah.

> Or do you suggest to allow all compares on GIMPLE and only fixup
> during ISEL?  How do we handle vector lowering then?  Would it be
> enough to require "any" condition code and thus we expect targets
> to implement enough codes so all compares can be handled by
> swapping/inversion?

I'm not sure TBH.  I can see the argument that "canonicalising"
conditions for the target could be done in either vector lowering or ISEL.

If a target can only do == or != natively, for instance (is any target
like that?), then I think it should be ok for the predicates to accept
only that condition.  Then the opposite != or == could be done using
vector lowering/ISEL, but ordered comparisons would need to be lowered
as though vec_cmp wasn't implemented at all.

Something similar probably applies to FP comparisons if the handling
of unordered comparisons is limited.

And if we do that, it might be easier for vector lowering to handle
everything itself, rather than try to predict what ISEL is going to do.

Thanks,
Richard
Richard Biener June 21, 2024, 9:53 a.m. UTC | #17
On Thu, 20 Jun 2024, Richard Sandiford wrote:

> Richard Biener <rguenther@suse.de> writes:
> > On Mon, 17 Jun 2024, Richard Sandiford wrote:
> >
> >> Richard Biener <rguenther@suse.de> writes:
> >> > On Fri, 14 Jun 2024, Richard Biener wrote:
> >> >
> >> >> On Fri, 14 Jun 2024, Richard Sandiford wrote:
> >> >> 
> >> >> > Richard Biener <rguenther@suse.de> writes:
> >> >> > > On Fri, 14 Jun 2024, Richard Sandiford wrote:
> >> >> > >
> >> >> > >> Richard Biener <rguenther@suse.de> writes:
> >> >> > >> > The following retires vcond{,u,eq} optabs by stopping to use them
> >> >> > >> > from the middle-end.  Targets instead (should) implement vcond_mask
> >> >> > >> > and vec_cmp{,u,eq} optabs.  The PR this change refers to lists
> >> >> > >> > possibly affected targets - those implementing these patterns,
> >> >> > >> > and in particular it lists mips, sparc and ia64 as targets that
> >> >> > >> > most definitely will regress while others might simply remove
> >> >> > >> > their vcond{,u,eq} patterns.
> >> >> > >> >
> >> >> > >> > I'd appreciate testing, I do not expect fallout for x86 or arm/aarch64.
> >> >> > >> > I know riscv doesn't implement any of the legacy optabs.  But less
> >> >> > >> > maintained vector targets might need adjustments.
> >> >> > >> >
> >> >> > >> > I want to get rid of those optabs for GCC 15.  If I don't hear from
> >> >> > >> > you I will assume your target is fine.
> >> >> > >> 
> >> >> > >> Great!  Thanks for doing this.
> >> >> > >> 
> >> >> > >> Is there a plan for how we should handle vector comparisons that
> >> >> > >> have to be done as the inverse of the negated condition?  Should
> >> >> > >> targets simply not provide vec_cmp for such conditions and leave
> >> >> > >> the target-independent code to deal with the fallout?  (For a
> >> >> > >> standalone comparison, it would invert the result.  For a VEC_COND_EXPR
> >> >> > >> it would swap the true and false values.)
> >> >> > >
> >> >> > > I would expect that the ISEL pass which currently deals with finding
> >> >> > > valid combos of .VCMP{,U,EQ} and .VCOND_MASK deals with this.
> >> >> > > So how do we deal with this right now?  I expect RTL expansion will
> >> >> > > do the inverse trick, no?
> >> >> > 
> >> >> > I think in practice (at least for the targets I've worked on),
> >> >> > the target's vec_cmp handles the inversion itself.  Thus the
> >> >> > main optimisation done by targets' vcond patterns is to avoid
> >> >> > the inversion (and instead swap the true/false values) when the
> >> >> > "opposite" comparison is the native one.
> >> >> 
> >> >> I see.  I suppose whether or not vec_cmp is handled is determined
> >> >> by a FAIL so it's somewhat difficult to determine this at ISEL time.
> >> 
> >> In principle we could say that the predicates should accept only the
> >> conditions that can be done natively.  Then target-independent code
> >> can apply the usual approaches to generating other conditions
> >> (which tend to be replicated across targets anyway).
> >
> > Ah yeah, I suppose that would work.  So we'd update the docs
> > to say predicates are required to reject not handled compares
> > and otherwise the expander may not FAIL?
> >
> > I'll note that expand_vec_cmp_expr_p already looks at the insn
> > predicates, so adjusting vector lowering (and vectorization) to
> > emit only recognized compares (and requiring folding to keep it at that)
> > should be possible.
> >
> > ISEL would then mainly need to learn the trick of swapping vector
> > cond arms on inverted masks.  OTOH folding should also do that.
> 
> Yeah.
> 
> > Or do you suggest to allow all compares on GIMPLE and only fixup
> > during ISEL?  How do we handle vector lowering then?  Would it be
> > enough to require "any" condition code and thus we expect targets
> > to implement enough codes so all compares can be handled by
> > swapping/inversion?
> 
> I'm not sure TBH.  I can see the argument that "canonicalising"
> conditions for the target could be either vector lowering or ISEL.
> 
> If a target can only do == or != natively, for instance (is any target
> like that?), then I think it should be ok for the predicates to accept
> only that condition.  Then the opposite != or == could be done using
> vector lowering/ISEL, but ordered comparisons would need to be lowered
> as though vec_cmp wasn't implemented at all.
> 
> Something similar probably applies to FP comparisons if the handling
> of unordered comparisons is limited.
> 
> And if we do that, it might be easier for vector lowering to handle
> everything itself, rather than try to predict what ISEL is going to do.

I agree that, as we have to handle completely unsupported cases in
vector lowering anyway, it's reasonable to try to force only supported
ops after that.

Note that when targets stop advertising unsupported compares, the
vectorizer likely needs adjustments as well.  We can of course put
some common logic into the middle-end, for example making the
expand_vec_cmp_expr_p function take additional optional arguments
to indicate whether swapping the operands or inverting the condition
code would make the comparison supported.
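
As a rough standalone sketch of that swap/invert idea (plain C with a
made-up cmp_code enum, not GCC's tree codes, and only covering integer
compares; inverting an FP compare is not this simple once unordered
results matter):

enum cmp_code { CMP_EQ, CMP_NE, CMP_LT, CMP_LE, CMP_GT, CMP_GE };

/* a <op> b  ==>  b <swap (op)> a; EQ/NE are symmetric.  */
static enum cmp_code
swap_cmp (enum cmp_code c)
{
  switch (c)
    {
    case CMP_LT: return CMP_GT;
    case CMP_LE: return CMP_GE;
    case CMP_GT: return CMP_LT;
    case CMP_GE: return CMP_LE;
    default:     return c;
    }
}

/* For integer compares, mask (invert (op)) == ~mask (op).  */
static enum cmp_code
invert_cmp (enum cmp_code c)
{
  switch (c)
    {
    case CMP_EQ: return CMP_NE;
    case CMP_NE: return CMP_EQ;
    case CMP_LT: return CMP_GE;
    case CMP_GE: return CMP_LT;
    case CMP_LE: return CMP_GT;
    case CMP_GT: return CMP_LE;
    }
  return c;
}

Such a query could then report whether the original, the swapped or
the inverted code is the one the target's predicate actually accepts.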

Do you know offhand of any target that restricts vec_cmp support via
predicates?  I'm not exactly sure how to implement such a predicate,
and having a patch for a target with some vector coverage would be
nice when implementing this.  I'd start with the vector lowering
bits, obviously; I suspect there are quite a few patterns in match.pd
that need guarding if no existing target does such predication.

Richard.
Richard Sandiford June 21, 2024, 2:26 p.m. UTC | #18
Richard Biener <rguenther@suse.de> writes:
> On Thu, 20 Jun 2024, Richard Sandiford wrote:
>
>> Richard Biener <rguenther@suse.de> writes:
>> > On Mon, 17 Jun 2024, Richard Sandiford wrote:
>> >
>> >> Richard Biener <rguenther@suse.de> writes:
>> >> > On Fri, 14 Jun 2024, Richard Biener wrote:
>> >> >
>> >> >> On Fri, 14 Jun 2024, Richard Sandiford wrote:
>> >> >> 
>> >> >> > Richard Biener <rguenther@suse.de> writes:
>> >> >> > > On Fri, 14 Jun 2024, Richard Sandiford wrote:
>> >> >> > >
>> >> >> > >> Richard Biener <rguenther@suse.de> writes:
>> >> >> > >> > The following retires vcond{,u,eq} optabs by stopping to use them
>> >> >> > >> > from the middle-end.  Targets instead (should) implement vcond_mask
>> >> >> > >> > and vec_cmp{,u,eq} optabs.  The PR this change refers to lists
>> >> >> > >> > possibly affected targets - those implementing these patterns,
>> >> >> > >> > and in particular it lists mips, sparc and ia64 as targets that
>> >> >> > >> > most definitely will regress while others might simply remove
>> >> >> > >> > their vcond{,u,eq} patterns.
>> >> >> > >> >
>> >> >> > >> > I'd appreciate testing, I do not expect fallout for x86 or arm/aarch64.
>> >> >> > >> > I know riscv doesn't implement any of the legacy optabs.  But less
>> >> >> > >> > maintained vector targets might need adjustments.
>> >> >> > >> >
>> >> >> > >> > I want to get rid of those optabs for GCC 15.  If I don't hear from
>> >> >> > >> > you I will assume your target is fine.
>> >> >> > >> 
>> >> >> > >> Great!  Thanks for doing this.
>> >> >> > >> 
>> >> >> > >> Is there a plan for how we should handle vector comparisons that
>> >> >> > >> have to be done as the inverse of the negated condition?  Should
>> >> >> > >> targets simply not provide vec_cmp for such conditions and leave
>> >> >> > >> the target-independent code to deal with the fallout?  (For a
>> >> >> > >> standalone comparison, it would invert the result.  For a VEC_COND_EXPR
>> >> >> > >> it would swap the true and false values.)
>> >> >> > >
>> >> >> > > I would expect that the ISEL pass which currently deals with finding
>> >> >> > > valid combos of .VCMP{,U,EQ} and .VCOND_MASK deals with this.
>> >> >> > > So how do we deal with this right now?  I expect RTL expansion will
>> >> >> > > do the inverse trick, no?
>> >> >> > 
>> >> >> > I think in practice (at least for the targets I've worked on),
>> >> >> > the target's vec_cmp handles the inversion itself.  Thus the
>> >> >> > main optimisation done by targets' vcond patterns is to avoid
>> >> >> > the inversion (and instead swap the true/false values) when the
>> >> >> > "opposite" comparison is the native one.
>> >> >> 
>> >> >> I see.  I suppose whether or not vec_cmp is handled is determined
>> >> >> by a FAIL so it's somewhat difficult to determine this at ISEL time.
>> >> 
>> >> In principle we could say that the predicates should accept only the
>> >> conditions that can be done natively.  Then target-independent code
>> >> can apply the usual approaches to generating other conditions
>> >> (which tend to be replicated across targets anyway).
>> >
>> > Ah yeah, I suppose that would work.  So we'd update the docs
>> > to say predicates are required to reject not handled compares
>> > and otherwise the expander may not FAIL?
>> >
>> > I'll note that expand_vec_cmp_expr_p already looks at the insn
>> > predicates, so adjusting vector lowering (and vectorization) to
>> > emit only recognized compares (and requiring folding to keep it at that)
>> > should be possible.
>> >
>> > ISEL would then mainly need to learn the trick of swapping vector
>> > cond arms on inverted masks.  OTOH folding should also do that.
>> 
>> Yeah.
>> 
>> > Or do you suggest to allow all compares on GIMPLE and only fixup
>> > during ISEL?  How do we handle vector lowering then?  Would it be
>> > enough to require "any" condition code and thus we expect targets
>> > to implement enough codes so all compares can be handled by
>> > swapping/inversion?
>> 
>> I'm not sure TBH.  I can see the argument that "canonicalising"
>> conditions for the target could be either vector lowering or ISEL.
>> 
>> If a target can only do == or != natively, for instance (is any target
>> like that?), then I think it should be ok for the predicates to accept
>> only that condition.  Then the opposite != or == could be done using
>> vector lowering/ISEL, but ordered comparisons would need to be lowered
>> as though vec_cmp wasn't implemented at all.
>> 
>> Something similar probably applies to FP comparisons if the handling
>> of unordered comparisons is limited.
>> 
>> And if we do that, it might be easier for vector lowering to handle
>> everything itself, rather than try to predict what ISEL is going to do.
>
> I agree that as we have to handle completely unsupported cases in
> vector lowering anyway it's reasonable to try to force only supported
> ops after that.
>
> Note that when targets stop to advertise not supported compares then
> the vectorizer likely needs adjustments as well.  We can of course
> put some common logic into the middle-end like making the
> expand_vec_cmp_expr_p function take additional optional arguments
> to indicate whether swapping operands or inverting the condition
> code makes the comparison supported.
>
> Do you know any target off head that restricts vec_cmp support via
> predicates?  I'm not exactly sure how do implement such predicate
> and having a patch for a target with some vector coverage would
> be nice for implementing this.  I'd start with the vector lowering
> bits obviously, I suspect there's quite some patterns in match.pd
> that need guarding in case there's no existing target doing such
> predication.

I'm not aware of a target that already does it, since I imagine the
pessimisation would be unacceptable.  But here's what it might look like
for AArch64 Advanced SIMD:

diff --git a/gcc/config/aarch64/aarch64-simd.md b/gcc/config/aarch64/aarch64-simd.md
index 01b084d8ccb..a3ce504cddd 100644
--- a/gcc/config/aarch64/aarch64-simd.md
+++ b/gcc/config/aarch64/aarch64-simd.md
@@ -3844,9 +3844,12 @@ (define_expand "cbranch<mode>4"
 
 ;; Patterns comparing two vectors to produce a mask.
 
+(define_predicate "int_vec_cmp_operator"
+  (match_code "lt,ge,le,gt,ltu,geu,leu,gtu,eq"))
+
 (define_expand "vec_cmp<mode><mode>"
   [(set (match_operand:VSDQ_I_DI 0 "register_operand")
-	  (match_operator 1 "comparison_operator"
+	  (match_operator 1 "int_vec_cmp_operator"
 	    [(match_operand:VSDQ_I_DI 2 "register_operand")
 	     (match_operand:VSDQ_I_DI 3 "nonmemory_operand")]))]
   "TARGET_SIMD"
@@ -3924,9 +3927,12 @@ (define_expand "vec_cmp<mode><mode>"
   DONE;
 })
 
+(define_predicate "fp_vec_cmp_operator"
+  (match_code "lt,ge,le,gt,eq"))
+
 (define_expand "vec_cmp<mode><v_int_equiv>"
   [(set (match_operand:<V_INT_EQUIV> 0 "register_operand")
-	(match_operator 1 "comparison_operator"
+	(match_operator 1 "fp_vec_cmp_operator"
 	    [(match_operand:VDQF 2 "register_operand")
 	     (match_operand:VDQF 3 "nonmemory_operand")]))]
   "TARGET_SIMD"

The FP case is tricky to decide because we do implement UNORDERED,
but seem to do it in a way that could signal invalid for sNaNs.
For -fno-signaling-nans, the approach of doing two ==s and
combining them is target-independent.
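
A minimal sketch of that combination (generic vector extensions again,
and assuming -fno-signaling-nans so the quiet == compares are
acceptable): x == x is false exactly when x is a NaN, so the unordered
mask falls out of two ordered compares:

typedef float v4sf __attribute__ ((vector_size (16)));
typedef int v4si __attribute__ ((vector_size (16)));

/* UNORDERED (a, b): true where at least one element is a NaN.  */
v4si
unordered_mask (v4sf a, v4sf b)
{
  return ~((v4si) (a == a) & (v4si) (b == b));
}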

Thanks,
Richard
diff mbox series

Patch

diff --git a/gcc/optabs-query.h b/gcc/optabs-query.h
index 0cb2c21ba85..31fbce80175 100644
--- a/gcc/optabs-query.h
+++ b/gcc/optabs-query.h
@@ -112,14 +112,9 @@  get_vec_cmp_eq_icode (machine_mode vmode, machine_mode mask_mode)
    mode CMODE, unsigned if UNS is true, resulting in a value of mode VMODE.  */
 
 inline enum insn_code
-get_vcond_icode (machine_mode vmode, machine_mode cmode, bool uns)
+get_vcond_icode (machine_mode, machine_mode, bool)
 {
-  enum insn_code icode = CODE_FOR_nothing;
-  if (uns)
-    icode = convert_optab_handler (vcondu_optab, vmode, cmode);
-  else
-    icode = convert_optab_handler (vcond_optab, vmode, cmode);
-  return icode;
+  return CODE_FOR_nothing;
 }
 
 /* Return insn code for a conditional operator with a mask mode
@@ -135,9 +130,9 @@  get_vcond_mask_icode (machine_mode vmode, machine_mode mmode)
    mode CMODE (only EQ/NE), resulting in a value of mode VMODE.  */
 
 inline enum insn_code
-get_vcond_eq_icode (machine_mode vmode, machine_mode cmode)
+get_vcond_eq_icode (machine_mode, machine_mode)
 {
-  return convert_optab_handler (vcondeq_optab, vmode, cmode);
+  return CODE_FOR_nothing;
 }
 
 /* Enumerates the possible extraction_insn operations.  */