PR tree-optimization/111922 - Ensure wi_fold arguments match precisions.

Message ID 96de63b4-649f-46e5-9e28-47fe9a65b948@redhat.com
State New
Series PR tree-optimization/111922 - Ensure wi_fold arguments match precisions.

Commit Message

Andrew MacLeod Nov. 24, 2023, 4:53 p.m. UTC
The problem here is that IPA is calling something like operator_minus 
with two operands of different precisions, one of precision 32 (int) and 
one of precision 64 (pointer). There are various ways this can happen, 
as described in the PR.
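
As a minimal sketch of the mismatch (hypothetical values; a wide_int 
carries its own precision, and the wi:: routines require both operands 
to share it):

   // Bounds of the sort IPA ends up passing in this scenario:
   wide_int lh = wi::shwi (10, 32);  // 32-bit (int) bound
   wide_int rh = wi::shwi (10, 64);  // 64-bit (pointer-sized) bound
   // Something like wi::sub (lh, rh, SIGNED, &ovf) then mixes
   // precisions, which the wide_int machinery does not support.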

Regardless of whether IPA should be promoting types before calling into 
range-ops, range-ops does not support mismatched precisions in its 
arguments, and it does not have the context to know what should be 
promoted or changed.  It is expected that the caller will ensure the 
operands are compatible.

However, it is not really practical for the caller to know this without 
more context.  Some operations support operands of different precisions 
or even types, e.g. shifts or casts.  It seems silly to require IPA to 
have a big switch on the tree code just to match up, promote, or bail 
out when operands don't match.

Range-ops routines probably shouldn't crash when this happens either, 
so this patch takes the conservative approach and returns VARYING if 
there is a mismatch in the arguments' precision.
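
Concretely, each affected wi_fold implementation gets an early-out of 
this shape (this is the pattern used throughout the patch below):

   // This operation requires all ranges and types to be the same precision.
   if (lh_lb.get_precision () != rh_lb.get_precision ()
       || lh_lb.get_precision () != TYPE_PRECISION (type))
     {
       r.set_varying (type);
       return;
     }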

This fixes the problem and bootstraps on x86_64-pc-linux-gnu with no 
new regressions.

OK for trunk?

Andrew

PS: If you would rather we trap in these cases and fix the callers, 
then I'd suggest we change these checks to checking_asserts instead.  I 
have also prepared a version that does a gcc_checking_assert instead of 
returning varying, and have bootstrapped and test-run it.  Of course, 
the callers would have to be changed.
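
In that variation, the guard becomes a checking assert along these 
lines (a sketch of the alternate version, not the posted patch):

   gcc_checking_assert (lh_lb.get_precision () == rh_lb.get_precision ()
			&& lh_lb.get_precision () == TYPE_PRECISION (type));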

That variation bootstraps fine too, and all the testcases (except this 
one, of course) pass.  It's clearly not a common occurrence, and my 
inclination is to apply this patch so we silently move on and simply 
don't provide useful range info; that is all the callers in these cases 
are likely to do with it anyway.

Comments

Richard Biener Nov. 27, 2023, 9:56 a.m. UTC | #1
On Fri, Nov 24, 2023 at 5:53 PM Andrew MacLeod <amacleod@redhat.com> wrote:
>
> The problem here is that IPA is calling something like operator_minus
> with two operands of different precisions, one of precision 32 (int) and
> one of precision 64 (pointer). There are various ways this can happen,
> as described in the PR.
>
> Regardless of whether IPA should be promoting types before calling into
> range-ops, range-ops does not support mismatched precisions in its
> arguments, and it does not have the context to know what should be
> promoted or changed.  It is expected that the caller will ensure the
> operands are compatible.
>
> However, it is not really practical for the caller to know this without
> more context.  Some operations support operands of different precisions
> or even types, e.g. shifts or casts.  It seems silly to require IPA to
> have a big switch on the tree code just to match up, promote, or bail
> out when operands don't match.
>
> Range-ops routines probably shouldn't crash when this happens either,
> so this patch takes the conservative approach and returns VARYING if
> there is a mismatch in the arguments' precision.
>
> This fixes the problem and bootstraps on x86_64-pc-linux-gnu with no
> new regressions.
>
> OK for trunk?
>
> Andrew
>
> PS: If you would rather we trap in these cases and fix the callers,
> then I'd suggest we change these checks to checking_asserts instead.  I
> have also prepared a version that does a gcc_checking_assert instead of
> returning varying, and have bootstrapped and test-run it.  Of course,
> the callers would have to be changed.

Yes, I'd very much prefer that - otherwise we get hard-to-find missed
optimizations when one botches the argument (types).

Richard.

>
> That variation bootstraps fine too, and all the testcases (except this
> one, of course) pass.  It's clearly not a common occurrence, and my
> inclination is to apply this patch so we silently move on and simply
> don't provide useful range info; that is all the callers in these cases
> are likely to do with it anyway.

Patch

From f9cddb4cf931826f09197ed0fc2d6d64e6ccc3c3 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod <amacleod@redhat.com>
Date: Wed, 22 Nov 2023 17:24:42 -0500
Subject: [PATCH] Ensure wi_fold arguments match precisions.

Return VARYING if any of the required operands or types to wi_fold do
not match expected precisions.

	PR tree-optimization/111922
	gcc/
	* range-op.cc (operator_plus::wi_fold): Check that precisions of
	arguments and result type match.
	(operator_widen_plus_signed::wi_fold): Ditto.
	(operator_widen_plus_unsigned::wi_fold): Ditto.
	(operator_minus::wi_fold): Ditto.
	(operator_min::wi_fold): Ditto.
	(operator_max::wi_fold): Ditto.
	(operator_mult::wi_fold): Ditto.
	(operator_widen_mult_signed::wi_fold): Ditto.
	(operator_widen_mult_unsigned::wi_fold): Ditto.
	(operator_div::wi_fold): Ditto.
	(operator_lshift::wi_fold): Ditto.
	(operator_rshift::wi_fold): Ditto.
	(operator_bitwise_and::wi_fold): Ditto.
	(operator_bitwise_or::wi_fold): Ditto.
	(operator_bitwise_xor::wi_fold): Ditto.
	(operator_trunc_mod::wi_fold): Ditto.
	(operator_abs::wi_fold): Ditto.
	(operator_absu::wi_fold): Ditto.

	gcc/testsuite/
	* gcc.dg/pr111922.c: New.
---
 gcc/range-op.cc                 | 119 ++++++++++++++++++++++++++++++++
 gcc/testsuite/gcc.dg/pr111922.c |  29 ++++++++
 2 files changed, 148 insertions(+)
 create mode 100644 gcc/testsuite/gcc.dg/pr111922.c

diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index 6137f2aeed3..ddb7339c075 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -1651,6 +1651,13 @@  operator_plus::wi_fold (irange &r, tree type,
 			const wide_int &lh_lb, const wide_int &lh_ub,
 			const wide_int &rh_lb, const wide_int &rh_ub) const
 {
+  // This operation requires all types to be the same precision.
+  if (lh_lb.get_precision () != rh_lb.get_precision ()
+      || lh_lb.get_precision () != TYPE_PRECISION (type))
+    {
+      r.set_varying (type);
+      return;
+    }
   wi::overflow_type ov_lb, ov_ub;
   signop s = TYPE_SIGN (type);
   wide_int new_lb = wi::add (lh_lb, rh_lb, s, &ov_lb);
@@ -1797,6 +1804,12 @@  operator_widen_plus_signed::wi_fold (irange &r, tree type,
 				     const wide_int &rh_lb,
 				     const wide_int &rh_ub) const
 {
+  // This operation requires both sides to be the same precision.
+  if (lh_lb.get_precision () != rh_lb.get_precision ())
+    {
+      r.set_varying (type);
+      return;
+    }
    wi::overflow_type ov_lb, ov_ub;
    signop s = TYPE_SIGN (type);
 
@@ -1830,6 +1843,12 @@  operator_widen_plus_unsigned::wi_fold (irange &r, tree type,
 				       const wide_int &rh_lb,
 				       const wide_int &rh_ub) const
 {
+  // This operation requires both sides to be the same precision.
+  if (lh_lb.get_precision () != rh_lb.get_precision ())
+    {
+      r.set_varying (type);
+      return;
+    }
    wi::overflow_type ov_lb, ov_ub;
    signop s = TYPE_SIGN (type);
 
@@ -1858,6 +1877,14 @@  operator_minus::wi_fold (irange &r, tree type,
 			 const wide_int &lh_lb, const wide_int &lh_ub,
 			 const wide_int &rh_lb, const wide_int &rh_ub) const
 {
+  // This operation requires all ranges and types to be the same precision.
+  if (lh_lb.get_precision () != rh_lb.get_precision ()
+      || lh_lb.get_precision () != TYPE_PRECISION (type))
+    {
+      r.set_varying (type);
+      return;
+    }
+
   wi::overflow_type ov_lb, ov_ub;
   signop s = TYPE_SIGN (type);
   wide_int new_lb = wi::sub (lh_lb, rh_ub, s, &ov_lb);
@@ -2008,6 +2035,13 @@  operator_min::wi_fold (irange &r, tree type,
 		       const wide_int &lh_lb, const wide_int &lh_ub,
 		       const wide_int &rh_lb, const wide_int &rh_ub) const
 {
+  // This operation requires all ranges and types to be the same precision.
+  if (lh_lb.get_precision () != rh_lb.get_precision ()
+      || lh_lb.get_precision () != TYPE_PRECISION (type))
+    {
+      r.set_varying (type);
+      return;
+    }
   signop s = TYPE_SIGN (type);
   wide_int new_lb = wi::min (lh_lb, rh_lb, s);
   wide_int new_ub = wi::min (lh_ub, rh_ub, s);
@@ -2027,6 +2061,13 @@  operator_max::wi_fold (irange &r, tree type,
 		       const wide_int &lh_lb, const wide_int &lh_ub,
 		       const wide_int &rh_lb, const wide_int &rh_ub) const
 {
+  // This operation requires all ranges and types to be the same precision.
+  if (lh_lb.get_precision () != rh_lb.get_precision ()
+      || lh_lb.get_precision () != TYPE_PRECISION (type))
+    {
+      r.set_varying (type);
+      return;
+    }
   signop s = TYPE_SIGN (type);
   wide_int new_lb = wi::max (lh_lb, rh_lb, s);
   wide_int new_ub = wi::max (lh_ub, rh_ub, s);
@@ -2149,6 +2190,13 @@  operator_mult::wi_fold (irange &r, tree type,
 			const wide_int &lh_lb, const wide_int &lh_ub,
 			const wide_int &rh_lb, const wide_int &rh_ub) const
 {
+  // This operation requires all ranges and types to be the same precision.
+  if (lh_lb.get_precision () != rh_lb.get_precision ()
+      || lh_lb.get_precision () != TYPE_PRECISION (type))
+    {
+      r.set_varying (type);
+      return;
+    }
   if (TYPE_OVERFLOW_UNDEFINED (type))
     {
       wi_cross_product (r, type, lh_lb, lh_ub, rh_lb, rh_ub);
@@ -2253,6 +2301,12 @@  operator_widen_mult_signed::wi_fold (irange &r, tree type,
 				     const wide_int &rh_lb,
 				     const wide_int &rh_ub) const
 {
+  // This operation requires both sides to be the same precision.
+  if (lh_lb.get_precision () != rh_lb.get_precision ())
+    {
+      r.set_varying (type);
+      return;
+    }
   signop s = TYPE_SIGN (type);
 
   wide_int lh_wlb = wide_int::from (lh_lb, wi::get_precision (lh_lb) * 2, SIGNED);
@@ -2285,6 +2339,12 @@  operator_widen_mult_unsigned::wi_fold (irange &r, tree type,
 				       const wide_int &rh_lb,
 				       const wide_int &rh_ub) const
 {
+  // This operation requires both sides to be the same precision.
+  if (lh_lb.get_precision () != rh_lb.get_precision ())
+    {
+      r.set_varying (type);
+      return;
+    }
   signop s = TYPE_SIGN (type);
 
   wide_int lh_wlb = wide_int::from (lh_lb, wi::get_precision (lh_lb) * 2, UNSIGNED);
@@ -2364,6 +2424,13 @@  operator_div::wi_fold (irange &r, tree type,
 		       const wide_int &lh_lb, const wide_int &lh_ub,
 		       const wide_int &rh_lb, const wide_int &rh_ub) const
 {
+  // This operation requires all ranges and types to be the same precision.
+  if (lh_lb.get_precision () != rh_lb.get_precision ()
+      || lh_lb.get_precision () != TYPE_PRECISION (type))
+    {
+      r.set_varying (type);
+      return;
+    }
   const wide_int dividend_min = lh_lb;
   const wide_int dividend_max = lh_ub;
   const wide_int divisor_min = rh_lb;
@@ -2555,6 +2622,12 @@  operator_lshift::wi_fold (irange &r, tree type,
 			  const wide_int &lh_lb, const wide_int &lh_ub,
 			  const wide_int &rh_lb, const wide_int &rh_ub) const
 {
+  // This operation requires left operand and type to be the same precision.
+  if (lh_lb.get_precision () != TYPE_PRECISION (type))
+    {
+      r.set_varying (type);
+      return;
+    }
   signop sign = TYPE_SIGN (type);
   unsigned prec = TYPE_PRECISION (type);
   int overflow_pos = sign == SIGNED ? prec - 1 : prec;
@@ -2807,6 +2880,12 @@  operator_rshift::wi_fold (irange &r, tree type,
 			  const wide_int &lh_lb, const wide_int &lh_ub,
 			  const wide_int &rh_lb, const wide_int &rh_ub) const
 {
+  // This operation requires left operand and type to be the same precision.
+  if (lh_lb.get_precision () != TYPE_PRECISION (type))
+    {
+      r.set_varying (type);
+      return;
+    }
   wi_cross_product (r, type, lh_lb, lh_ub, rh_lb, rh_ub);
 }
 
@@ -3319,6 +3398,13 @@  operator_bitwise_and::wi_fold (irange &r, tree type,
 			       const wide_int &rh_lb,
 			       const wide_int &rh_ub) const
 {
+  // This operation requires all ranges and types to be the same precision.
+  if (lh_lb.get_precision () != rh_lb.get_precision ()
+      || lh_lb.get_precision () != TYPE_PRECISION (type))
+    {
+      r.set_varying (type);
+      return;
+    }
   if (wi_optimize_and_or (r, BIT_AND_EXPR, type, lh_lb, lh_ub, rh_lb, rh_ub))
     return;
 
@@ -3629,6 +3715,13 @@  operator_bitwise_or::wi_fold (irange &r, tree type,
 			      const wide_int &rh_lb,
 			      const wide_int &rh_ub) const
 {
+  // This operation requires all ranges and types to be the same precision.
+  if (lh_lb.get_precision () != rh_lb.get_precision ()
+      || lh_lb.get_precision () != TYPE_PRECISION (type))
+    {
+      r.set_varying (type);
+      return;
+    }
   if (wi_optimize_and_or (r, BIT_IOR_EXPR, type, lh_lb, lh_ub, rh_lb, rh_ub))
     return;
 
@@ -3722,6 +3815,13 @@  operator_bitwise_xor::wi_fold (irange &r, tree type,
 			       const wide_int &rh_lb,
 			       const wide_int &rh_ub) const
 {
+  // This operation requires all ranges and types to be the same precision.
+  if (lh_lb.get_precision () != rh_lb.get_precision ()
+      || lh_lb.get_precision () != TYPE_PRECISION (type))
+    {
+      r.set_varying (type);
+      return;
+    }
   signop sign = TYPE_SIGN (type);
   wide_int maybe_nonzero_lh, mustbe_nonzero_lh;
   wide_int maybe_nonzero_rh, mustbe_nonzero_rh;
@@ -3865,6 +3965,13 @@  operator_trunc_mod::wi_fold (irange &r, tree type,
 			     const wide_int &rh_lb,
 			     const wide_int &rh_ub) const
 {
+  // This operation requires all ranges and types to be the same precision.
+  if (lh_lb.get_precision () != rh_lb.get_precision ()
+      || lh_lb.get_precision () != TYPE_PRECISION (type))
+    {
+      r.set_varying (type);
+      return;
+    }
   wide_int new_lb, new_ub, tmp;
   signop sign = TYPE_SIGN (type);
   unsigned prec = TYPE_PRECISION (type);
@@ -4151,6 +4258,12 @@  operator_abs::wi_fold (irange &r, tree type,
 		       const wide_int &rh_lb ATTRIBUTE_UNUSED,
 		       const wide_int &rh_ub ATTRIBUTE_UNUSED) const
 {
+  // This operation requires the operand and type to be the same precision.
+  if (lh_lb.get_precision () != TYPE_PRECISION (type))
+    {
+      r.set_varying (type);
+      return;
+    }
   wide_int min, max;
   signop sign = TYPE_SIGN (type);
   unsigned prec = TYPE_PRECISION (type);
@@ -4274,6 +4387,12 @@  operator_absu::wi_fold (irange &r, tree type,
 			const wide_int &rh_lb ATTRIBUTE_UNUSED,
 			const wide_int &rh_ub ATTRIBUTE_UNUSED) const
 {
+  // This operation requires the operand and type to be the same precision.
+  if (lh_lb.get_precision () != TYPE_PRECISION (type))
+    {
+      r.set_varying (type);
+      return;
+    }
   wide_int new_lb, new_ub;
 
   // Pass through VR0 the easy cases.
diff --git a/gcc/testsuite/gcc.dg/pr111922.c b/gcc/testsuite/gcc.dg/pr111922.c
new file mode 100644
index 00000000000..4f429d741c7
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/pr111922.c
@@ -0,0 +1,29 @@ 
+/* { dg-do compile } */
+/* { dg-options "-O2 -fno-tree-fre" } */
+
+void f2 (void);
+void f4 (int, int, int);
+struct A { int a; };
+struct B { struct A *b; int c; } v;
+
+static int
+f1 (x, y)
+  struct C *x;
+  struct A *y;
+{
+  (v.c = v.b->a) || (v.c = v.b->a);
+  f2 ();
+}
+
+static void
+f3 (int x, int y)
+{
+  int b = f1 (0, ~x);
+  f4 (0, 0, v.c);
+}
+
+void
+f5 (void)
+{
+  f3 (0, 0);
+}
-- 
2.41.0