wide-int: Fix up mul_internal overflow checking [PR116224]

Message ID ZrHZmZjD5VQtpsL5@tucnak
State New
Series wide-int: Fix up mul_internal overflow checking [PR116224]

Commit Message

Jakub Jelinek Aug. 6, 2024, 8:06 a.m. UTC
Hi!

The following testcase is miscompiled, because wi::mul for (_BitInt(65))-15
times (_BitInt(65))-15 computes the right value (_BitInt(65))225, but
sets *overflow to wi::OVF_UNKNOWN as if it had overflowed when it didn't.

Even signed operands are unpacked as unsigned, but because they are
implicitly sign-extended from the represented value (the operands
obviously have len==1), we get
0xfffffff1, 0xffffffff, 0x1, 0x0
in both u and v (0x1 because that is exactly 65 bits).
We then multiply these.  In the next step, because both the high and
overflow handling expect the high half to start at a limb boundary,
the bits of the result starting with bit 65 are shifted up by 63, so
that the bits relevant for high/need_overflow start at the upper half
of the result, i.e. at the 4th half wide int limb (r[half_blocks_needed]).
Because both operands are negative, that part is then adjusted.
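
To make the limb layout above concrete, here is a small standalone sketch
(plain C, not GCC internals; the four 32-bit half limbs match a host with
64-bit HOST_WIDE_INT, which is an assumption of the sketch) reproducing
what the unpacking yields for (_BitInt(65))-15:

#include <stdio.h>
#include <stdint.h>

int
main (void)
{
  /* -15 is stored in a single 64-bit limb (len == 1); the 65-bit value
     is its sign extension, so bit 64 is 1 and nothing above the 65-bit
     precision gets filled in by the unpacking.  */
  uint64_t low = (uint64_t) -15;   /* bits 0..63: 0xfffffffffffffff1 */
  uint64_t high = 1;               /* bit 64 only, prec == 65 */

  uint32_t half[4] = { (uint32_t) low, (uint32_t) (low >> 32),
                       (uint32_t) high, (uint32_t) (high >> 32) };

  for (int i = 0; i < 4; i++)
    printf ("0x%x\n", (unsigned) half[i]);  /* 0xfffffff1 0xffffffff 0x1 0x0 */
  return 0;
}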

The reason mul_internal says there is overflow is the unspecified
garbage in the most significant bits of the result, which the adjusting
doesn't clean up.  A 65-bit multiplication needs 65 bits of result and 65
bits of the high part and can't produce more, so the following patch fixes
it by checking for overflow only in those first 65 bits of the high part,
not in anything beyond that.  If it were a highpart multiply, we'd have
ignored those bits as well (canonized).
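
As a rough illustration of what the fix does, here is a standalone sketch
hard-coded to prec == 65 with 32-bit half limbs (the helper name, the
negative flag and the sample garbage values are made up for the sketch;
this is not the wide-int.cc code itself).  The high half of the product
occupies r[4..7], but only its least significant 65 bits can be real data,
i.e. r[4], r[5] and bit 0 of r[6]; the patch sign-fills or clears the
partial limb and stops the overflow loop before r[7], so leftover garbage
is never compared against the expected sign pattern:

#include <stdint.h>
#include <stdio.h>

/* Mask the high half of the product down to its first prec bits.
   For prec == 65, half_blocks_needed == 4 and the high half is r[4..7].  */
static unsigned int
mask_high_part (uint32_t r[8], int negative)
{
  unsigned int end = 8;              /* half_blocks_needed * 2 */
  unsigned int shift = 65 % 64;      /* prec % HOST_BITS_PER_WIDE_INT == 1 */
  if (shift <= 32)                   /* partial limb is in the lower half */
    --end;                           /* r[7] is ignored entirely */
  shift %= 32;
  if (shift)
    {
      if (negative)
        r[end - 1] |= ~0u << shift;          /* fill with sign bits */
      else
        r[end - 1] &= (1u << shift) - 1;     /* clear the garbage bits */
    }
  return end;                        /* overflow check covers r[4..end-1] */
}

int
main (void)
{
  /* (-15) * (-15) == 225 fits into 65 bits, so the meaningful high bits
     are all zero, but r[6] and r[7] may hold garbage left over from the
     shifting/adjusting step.  */
  uint32_t r[8] = { 0, 0, 0, 0, 0, 0, 0xdeadbeee, 0x12345678 };
  unsigned int end = mask_high_part (r, /*negative=*/0);
  int overflow = 0;
  for (unsigned int i = 4; i < end; i++)
    if (r[i] != 0)                   /* expected pattern (top) is 0 here */
      overflow = 1;
  printf ("overflow = %d\n", overflow);  /* prints 0 */
  return 0;
}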

Bootstrapped/regtested on x86_64-linux and i686-linux, ok for trunk?

2024-08-06  Jakub Jelinek  <jakub@redhat.com>

	PR tree-optimization/116224
	* wide-int.cc (wi::mul_internal): If prec isn't multiple of
	HOST_BITS_PER_WIDE_INT, for need_overflow checking only look at
	the least significant prec bits starting with r[half_blocks_needed].

	* gcc.dg/torture/bitint-72.c: New test.


	Jakub

Comments

Richard Biener Aug. 6, 2024, 8:11 a.m. UTC | #1
On Tue, 6 Aug 2024, Jakub Jelinek wrote:

> Hi!
> 
> The following testcase is miscompiled, because wi::mul for (_BitInt(65))-15
> times (_BitInt(65))-15 computes the right value (_BitInt(65))225, but
> sets *overflow to wi::OVF_UNKNOWN as if it had overflowed when it didn't.
> 
> Even signed operands are unpacked as unsigned, but because they are
> implicitly sign-extended from the represented value (the operands
> obviously have len==1), we get
> 0xfffffff1, 0xffffffff, 0x1, 0x0
> in both u and v (0x1 because that is exactly 65 bits).
> We then multiply these.  In the next step, because both the high and
> overflow handling expect the high half to start at a limb boundary,
> the bits of the result starting with bit 65 are shifted up by 63, so
> that the bits relevant for high/need_overflow start at the upper half
> of the result, i.e. at the 4th half wide int limb (r[half_blocks_needed]).
> Because both operands are negative, that part is then adjusted.
> 
> The reason mul_internal says there is overflow is the unspecified
> garbage in the most significant bits of the result, which the adjusting
> doesn't clean up.  A 65-bit multiplication needs 65 bits of result and 65
> bits of the high part and can't produce more, so the following patch fixes
> it by checking for overflow only in those first 65 bits of the high part,
> not in anything beyond that.  If it were a highpart multiply, we'd have
> ignored those bits as well (canonized).
> 
> Bootstrapped/regtested on x86_64-linux and i686-linux, ok for trunk?

LGTM.

Richard.

> 2024-08-06  Jakub Jelinek  <jakub@redhat.com>
> 
> 	PR tree-optimization/116224
> 	* wide-int.cc (wi::mul_internal): If prec isn't multiple of
> 	HOST_BITS_PER_WIDE_INT, for need_overflow checking only look at
> 	the least significant prec bits starting with r[half_blocks_needed].
> 
> 	* gcc.dg/torture/bitint-72.c: New test.
> 
> --- gcc/wide-int.cc.jj	2024-02-24 12:45:28.701248718 +0100
> +++ gcc/wide-int.cc	2024-08-05 21:59:06.254441894 +0200
> @@ -1574,7 +1574,24 @@ wi::mul_internal (HOST_WIDE_INT *val, co
>  	  top &= mask;
>  	}
>  
> -      for (i = half_blocks_needed; i < half_blocks_needed * 2; i++)
> +      unsigned int end = half_blocks_needed * 2;
> +      shift = prec % HOST_BITS_PER_WIDE_INT;
> +      if (shift)
> +	{
> +	  /* For overflow checking only look at the first prec bits
> +	     starting with r[half_blocks_needed].  */
> +	  if (shift <= HOST_BITS_PER_HALF_WIDE_INT)
> +	    --end;
> +	  shift %= HOST_BITS_PER_HALF_WIDE_INT;
> +	  if (shift)
> +	    {
> +	      if (top)
> +		r[end - 1] |= ((~(unsigned HOST_HALF_WIDE_INT) 0) << shift);
> +	      else
> +		r[end - 1] &= (((unsigned HOST_HALF_WIDE_INT) 1) << shift) - 1;
> +	    }
> +	}
> +      for (i = half_blocks_needed; i < end; i++)
>  	if (((HOST_WIDE_INT)(r[i] & mask)) != top)
>  	  /* FIXME: Signed overflow type is not implemented yet.  */
>  	  *overflow = (sgn == UNSIGNED) ? wi::OVF_OVERFLOW : wi::OVF_UNKNOWN;
> --- gcc/testsuite/gcc.dg/torture/bitint-72.c.jj	2024-08-05 15:28:34.924687922 +0200
> +++ gcc/testsuite/gcc.dg/torture/bitint-72.c	2024-08-05 15:28:26.049805114 +0200
> @@ -0,0 +1,28 @@
> +/* PR tree-optimization/116224 */
> +/* { dg-do run { target bitint } } */
> +/* { dg-options "-std=c23" } */
> +/* { dg-skip-if "" { ! run_expensive_tests }  { "*" } { "-O0" "-O2" } } */
> +/* { dg-skip-if "" { ! run_expensive_tests } { "-flto" } { "" } } */
> +
> +#if __BITINT_MAXWIDTH__ >= 65
> +#define N 65
> +#else
> +#define N 63
> +#endif
> +
> +signed char g;
> +
> +int
> +foo (signed char c, int i, _BitInt(N) b)
> +{
> +  __builtin_memmove (&g, &b, 1);
> +  return b / i / c;
> +}
> +
> +int
> +main ()
> +{
> +  int x = foo (-15, -15, 900);
> +  if (x != 4)
> +    __builtin_abort ();
> +}
> 
> 	Jakub
> 
>
Sam James Aug. 6, 2024, 8:13 a.m. UTC | #2
Jakub Jelinek <jakub@redhat.com> writes:

> Hi!
>
> The following testcase is miscompiled, because wi::mul for (_BitInt(65))-15
> times (_BitInt(65))-15 computes the right value (_BitInt(65))225, but
> sets *overflow to wi::OVF_UNKNOWN as if it had overflowed when it didn't.
>
> Even signed operands are unpacked as unsigned, but because they are
> implicitly sign-extended from the represented value (the operands
> obviously have len==1), we get
> 0xfffffff1, 0xffffffff, 0x1, 0x0
> in both u and v (0x1 because that is exactly 65 bits).
> We then multiply these.  In the next step, because both the high and
> overflow handling expect the high half to start at a limb boundary,
> the bits of the result starting with bit 65 are shifted up by 63, so
> that the bits relevant for high/need_overflow start at the upper half
> of the result, i.e. at the 4th half wide int limb (r[half_blocks_needed]).
> Because both operands are negative, that part is then adjusted.
>
> The reason mul_internal says there is overflow is the unspecified
> garbage in the most significant bits of the result, which the adjusting
> doesn't clean up.  A 65-bit multiplication needs 65 bits of result and 65
> bits of the high part and can't produce more, so the following patch fixes
> it by checking for overflow only in those first 65 bits of the high part,
> not in anything beyond that.  If it were a highpart multiply, we'd have
> ignored those bits as well (canonized).

Nit: canonicalized. to canonize is to become a saint :)

thanks,
sam

Patch

--- gcc/wide-int.cc.jj	2024-02-24 12:45:28.701248718 +0100
+++ gcc/wide-int.cc	2024-08-05 21:59:06.254441894 +0200
@@ -1574,7 +1574,24 @@  wi::mul_internal (HOST_WIDE_INT *val, co
 	  top &= mask;
 	}
 
-      for (i = half_blocks_needed; i < half_blocks_needed * 2; i++)
+      unsigned int end = half_blocks_needed * 2;
+      shift = prec % HOST_BITS_PER_WIDE_INT;
+      if (shift)
+	{
+	  /* For overflow checking only look at the first prec bits
+	     starting with r[half_blocks_needed].  */
+	  if (shift <= HOST_BITS_PER_HALF_WIDE_INT)
+	    --end;
+	  shift %= HOST_BITS_PER_HALF_WIDE_INT;
+	  if (shift)
+	    {
+	      if (top)
+		r[end - 1] |= ((~(unsigned HOST_HALF_WIDE_INT) 0) << shift);
+	      else
+		r[end - 1] &= (((unsigned HOST_HALF_WIDE_INT) 1) << shift) - 1;
+	    }
+	}
+      for (i = half_blocks_needed; i < end; i++)
 	if (((HOST_WIDE_INT)(r[i] & mask)) != top)
 	  /* FIXME: Signed overflow type is not implemented yet.  */
 	  *overflow = (sgn == UNSIGNED) ? wi::OVF_OVERFLOW : wi::OVF_UNKNOWN;
--- gcc/testsuite/gcc.dg/torture/bitint-72.c.jj	2024-08-05 15:28:34.924687922 +0200
+++ gcc/testsuite/gcc.dg/torture/bitint-72.c	2024-08-05 15:28:26.049805114 +0200
@@ -0,0 +1,28 @@ 
+/* PR tree-optimization/116224 */
+/* { dg-do run { target bitint } } */
+/* { dg-options "-std=c23" } */
+/* { dg-skip-if "" { ! run_expensive_tests }  { "*" } { "-O0" "-O2" } } */
+/* { dg-skip-if "" { ! run_expensive_tests } { "-flto" } { "" } } */
+
+#if __BITINT_MAXWIDTH__ >= 65
+#define N 65
+#else
+#define N 63
+#endif
+
+signed char g;
+
+int
+foo (signed char c, int i, _BitInt(N) b)
+{
+  __builtin_memmove (&g, &b, 1);
+  return b / i / c;
+}
+
+int
+main ()
+{
+  int x = foo (-15, -15, 900);
+  if (x != 4)
+    __builtin_abort ();
+}