Message ID: alpine.DEB.2.02.1305141316560.6987@stedding.saclay.inria.fr
State: New
On Tue, May 14, 2013 at 1:23 PM, Marc Glisse <marc.glisse@inria.fr> wrote:
> Hello,
>
> in an earlier patch I apparently introduced a platform dependent signed /
> unsigned comparison, so here is a fix. I am taking the chance to fix the
> host_integerp second argument nearby: we want non-negative integers.
>
> Passes bootstrap+testsuite on x86_64-linux-gnu and apparently bootstrap on
> m68k.
>
> 2013-05-13  Marc Glisse  <marc.glisse@inria.fr>
>
>         PR bootstrap/57266
>         * fold-const.c (fold_binary_loc) <shift>: Use an unsigned
>         variable for the shift amount.  Check that we shift by non-negative
>         amounts.
>
> --
> Marc Glisse
> Index: gcc/fold-const.c
> ===================================================================
> --- gcc/fold-const.c    (revision 198853)
> +++ gcc/fold-const.c    (working copy)
> @@ -12416,27 +12416,27 @@ fold_binary_loc (location_t loc,
>          return fold_build2_loc (loc, code, type, op0, tem);
>
>        /* Since negative shift count is not well-defined,
>           don't try to compute it in the compiler.  */
>        if (TREE_CODE (arg1) == INTEGER_CST && tree_int_cst_sgn (arg1) < 0)
>          return NULL_TREE;
>
>        prec = element_precision (type);
>
>        /* Turn (a OP c1) OP c2 into a OP (c1+c2).  */
> -      if (TREE_CODE (op0) == code && host_integerp (arg1, false)
> +      if (TREE_CODE (op0) == code && host_integerp (arg1, true)
>            && TREE_INT_CST_LOW (arg1) < prec
> -          && host_integerp (TREE_OPERAND (arg0, 1), false)
> +          && host_integerp (TREE_OPERAND (arg0, 1), true)
>            && TREE_INT_CST_LOW (TREE_OPERAND (arg0, 1)) < prec)

That looks good, but ...

>          {
> -          HOST_WIDE_INT low = (TREE_INT_CST_LOW (TREE_OPERAND (arg0, 1))
> -                               + TREE_INT_CST_LOW (arg1));
> +          unsigned int low = (TREE_INT_CST_LOW (TREE_OPERAND (arg0, 1))
> +                              + TREE_INT_CST_LOW (arg1));

that's wrong.  TREE_INT_CST_LOW doesn't fit in an unsigned int.
unsigned HOST_WIDE_INT would work though.

Ok with that change.

Thanks,
Richard.

>            /* Deal with a OP (c1 + c2) being undefined but (a OP c1) OP c2
>               being well defined.  */
>            if (low >= prec)
>              {
>                if (code == LROTATE_EXPR || code == RROTATE_EXPR)
>                  low = low % prec;
>                else if (TYPE_UNSIGNED (type) || code == LSHIFT_EXPR)
>                  return omit_one_operand_loc (loc, type, build_zero_cst (type),
>                                               TREE_OPERAND (arg0, 0));
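For readers following along outside the GCC tree, here is a minimal standalone sketch of the truncation Richard is warning about. The types below are illustrative stand-ins, not GCC's actual macros: plain uint64_t plays the role of an unsigned HOST_WIDE_INT on a host where HOST_WIDE_INT is 64 bits, while unsigned int is assumed to be 32 bits.

    #include <stdint.h>
    #include <stdio.h>

    int main (void)
    {
      /* Stand-in for the value TREE_INT_CST_LOW yields: an unsigned
         HOST_WIDE_INT, i.e. 64 bits wide on this hypothetical host.  */
      uint64_t cst_low = (uint64_t) 1 << 40;   /* needs more than 32 bits */

      unsigned int truncated = cst_low;        /* upper 32 bits silently lost */
      uint64_t     preserved = cst_low;        /* the width Richard suggests */

      printf ("truncated = %u\n", truncated);  /* prints 0 */
      printf ("preserved = %llu\n", (unsigned long long) preserved);
      return 0;
    }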
On Tue, 14 May 2013, Richard Biener wrote:

> On Tue, May 14, 2013 at 1:23 PM, Marc Glisse <marc.glisse@inria.fr> wrote:
>> Hello,
>>
>> in an earlier patch I apparently introduced a platform dependent signed /
>> unsigned comparison, so here is a fix. I am taking the chance to fix the
>> host_integerp second argument nearby: we want non-negative integers.
>>
>> Passes bootstrap+testsuite on x86_64-linux-gnu and apparently bootstrap on
>> m68k.
>>
>> 2013-05-13  Marc Glisse  <marc.glisse@inria.fr>
>>
>>         PR bootstrap/57266
>>         * fold-const.c (fold_binary_loc) <shift>: Use an unsigned
>>         variable for the shift amount.  Check that we shift by non-negative
>>         amounts.
>>
>> --
>> Marc Glisse
>> Index: gcc/fold-const.c
>> ===================================================================
>> --- gcc/fold-const.c    (revision 198853)
>> +++ gcc/fold-const.c    (working copy)
>> @@ -12416,27 +12416,27 @@ fold_binary_loc (location_t loc,
>>          return fold_build2_loc (loc, code, type, op0, tem);
>>
>>        /* Since negative shift count is not well-defined,
>>           don't try to compute it in the compiler.  */
>>        if (TREE_CODE (arg1) == INTEGER_CST && tree_int_cst_sgn (arg1) < 0)
>>          return NULL_TREE;
>>
>>        prec = element_precision (type);
>>
>>        /* Turn (a OP c1) OP c2 into a OP (c1+c2).  */
>> -      if (TREE_CODE (op0) == code && host_integerp (arg1, false)
>> +      if (TREE_CODE (op0) == code && host_integerp (arg1, true)
>>            && TREE_INT_CST_LOW (arg1) < prec
>> -          && host_integerp (TREE_OPERAND (arg0, 1), false)
>> +          && host_integerp (TREE_OPERAND (arg0, 1), true)
>>            && TREE_INT_CST_LOW (TREE_OPERAND (arg0, 1)) < prec)
>
> That looks good, but ...
>
>>          {
>> -          HOST_WIDE_INT low = (TREE_INT_CST_LOW (TREE_OPERAND (arg0, 1))
>> -                               + TREE_INT_CST_LOW (arg1));
>> +          unsigned int low = (TREE_INT_CST_LOW (TREE_OPERAND (arg0, 1))
>> +                              + TREE_INT_CST_LOW (arg1));
>
> that's wrong.  TREE_INT_CST_LOW doesn't fit in an unsigned int.
> unsigned HOST_WIDE_INT would work though.

The checks above show that both TREE_INT_CST_LOW are smaller than prec (for
which I already used an unsigned int, but an unsigned char might have been
enough until Kenneth introduces int512_t). Do you still want the change?

> Ok with that change.
>
> Thanks,
> Richard.
>
>>            /* Deal with a OP (c1 + c2) being undefined but (a OP c1) OP c2
>>               being well defined.  */
>>            if (low >= prec)
>>              {
>>                if (code == LROTATE_EXPR || code == RROTATE_EXPR)
>>                  low = low % prec;
>>                else if (TYPE_UNSIGNED (type) || code == LSHIFT_EXPR)
>>                  return omit_one_operand_loc (loc, type, build_zero_cst (type),
>>                                               TREE_OPERAND (arg0, 0));
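Marc's bound argument can be restated as a small self-contained sketch. The function combine_shift_counts and the concrete values are invented for illustration; the sketch only assumes what the patch's guards already established, namely that both counts were checked to be below prec before the addition.

    #include <assert.h>

    /* Combine two already-validated shift counts, as the patch does.
       Both c1 and c2 are known to be < prec when this is called, so
       c1 + c2 < 2 * prec and an unsigned int cannot be truncated here.  */
    static unsigned int
    combine_shift_counts (unsigned int c1, unsigned int c2, unsigned int prec)
    {
      assert (c1 < prec && c2 < prec);
      return c1 + c2;   /* < 2 * prec, comfortably within unsigned int */
    }

    int main (void)
    {
      unsigned int prec = 32;   /* e.g. a 32-bit element type */
      unsigned int low = combine_shift_counts (20, 20, prec);
      assert (low == 40 && low >= prec);   /* the "low >= prec" branch */
      return 0;
    }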
On Tue, May 14, 2013 at 1:34 PM, Marc Glisse <marc.glisse@inria.fr> wrote:
> On Tue, 14 May 2013, Richard Biener wrote:
>
>> On Tue, May 14, 2013 at 1:23 PM, Marc Glisse <marc.glisse@inria.fr> wrote:
>>>
>>> Hello,
>>>
>>> in an earlier patch I apparently introduced a platform dependent signed /
>>> unsigned comparison, so here is a fix. I am taking the chance to fix the
>>> host_integerp second argument nearby: we want non-negative integers.
>>>
>>> Passes bootstrap+testsuite on x86_64-linux-gnu and apparently bootstrap
>>> on m68k.
>>>
>>> 2013-05-13  Marc Glisse  <marc.glisse@inria.fr>
>>>
>>>         PR bootstrap/57266
>>>         * fold-const.c (fold_binary_loc) <shift>: Use an unsigned
>>>         variable for the shift amount.  Check that we shift by
>>>         non-negative amounts.
>>>
>>> --
>>> Marc Glisse
>>> Index: gcc/fold-const.c
>>> ===================================================================
>>> --- gcc/fold-const.c    (revision 198853)
>>> +++ gcc/fold-const.c    (working copy)
>>> @@ -12416,27 +12416,27 @@ fold_binary_loc (location_t loc,
>>>          return fold_build2_loc (loc, code, type, op0, tem);
>>>
>>>        /* Since negative shift count is not well-defined,
>>>           don't try to compute it in the compiler.  */
>>>        if (TREE_CODE (arg1) == INTEGER_CST && tree_int_cst_sgn (arg1) < 0)
>>>          return NULL_TREE;
>>>
>>>        prec = element_precision (type);
>>>
>>>        /* Turn (a OP c1) OP c2 into a OP (c1+c2).  */
>>> -      if (TREE_CODE (op0) == code && host_integerp (arg1, false)
>>> +      if (TREE_CODE (op0) == code && host_integerp (arg1, true)
>>>            && TREE_INT_CST_LOW (arg1) < prec
>>> -          && host_integerp (TREE_OPERAND (arg0, 1), false)
>>> +          && host_integerp (TREE_OPERAND (arg0, 1), true)
>>>            && TREE_INT_CST_LOW (TREE_OPERAND (arg0, 1)) < prec)
>>
>> That looks good, but ...
>>
>>>          {
>>> -          HOST_WIDE_INT low = (TREE_INT_CST_LOW (TREE_OPERAND (arg0, 1))
>>> -                               + TREE_INT_CST_LOW (arg1));
>>> +          unsigned int low = (TREE_INT_CST_LOW (TREE_OPERAND (arg0, 1))
>>> +                              + TREE_INT_CST_LOW (arg1));
>>
>> that's wrong.  TREE_INT_CST_LOW doesn't fit in an unsigned int.
>> unsigned HOST_WIDE_INT would work though.
>
> The checks above show that both TREE_INT_CST_LOW are smaller than prec (for
> which I already used an unsigned int, but an unsigned char might have been
> enough until Kenneth introduces int512_t). Do you still want the change?

Ah, ok.  No, fine without the change.

Thanks,
Richard.

>> Ok with that change.
>>
>> Thanks,
>> Richard.
>>
>>>            /* Deal with a OP (c1 + c2) being undefined but (a OP c1) OP c2
>>>               being well defined.  */
>>>            if (low >= prec)
>>>              {
>>>                if (code == LROTATE_EXPR || code == RROTATE_EXPR)
>>>                  low = low % prec;
>>>                else if (TYPE_UNSIGNED (type) || code == LSHIFT_EXPR)
>>>                  return omit_one_operand_loc (loc, type, build_zero_cst (type),
>>>                                               TREE_OPERAND (arg0, 0));
>
> --
> Marc Glisse
Index: gcc/fold-const.c
===================================================================
--- gcc/fold-const.c    (revision 198853)
+++ gcc/fold-const.c    (working copy)
@@ -12416,27 +12416,27 @@ fold_binary_loc (location_t loc,
         return fold_build2_loc (loc, code, type, op0, tem);
 
       /* Since negative shift count is not well-defined,
          don't try to compute it in the compiler.  */
       if (TREE_CODE (arg1) == INTEGER_CST && tree_int_cst_sgn (arg1) < 0)
         return NULL_TREE;
 
       prec = element_precision (type);
 
       /* Turn (a OP c1) OP c2 into a OP (c1+c2).  */
-      if (TREE_CODE (op0) == code && host_integerp (arg1, false)
+      if (TREE_CODE (op0) == code && host_integerp (arg1, true)
           && TREE_INT_CST_LOW (arg1) < prec
-          && host_integerp (TREE_OPERAND (arg0, 1), false)
+          && host_integerp (TREE_OPERAND (arg0, 1), true)
           && TREE_INT_CST_LOW (TREE_OPERAND (arg0, 1)) < prec)
         {
-          HOST_WIDE_INT low = (TREE_INT_CST_LOW (TREE_OPERAND (arg0, 1))
-                               + TREE_INT_CST_LOW (arg1));
+          unsigned int low = (TREE_INT_CST_LOW (TREE_OPERAND (arg0, 1))
+                              + TREE_INT_CST_LOW (arg1));
 
           /* Deal with a OP (c1 + c2) being undefined but (a OP c1) OP c2
              being well defined.  */
           if (low >= prec)
             {
               if (code == LROTATE_EXPR || code == RROTATE_EXPR)
                 low = low % prec;
               else if (TYPE_UNSIGNED (type) || code == LSHIFT_EXPR)
                 return omit_one_operand_loc (loc, type, build_zero_cst (type),
                                              TREE_OPERAND (arg0, 0));
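To see the transformation the patched code performs, here is a simplified model operating on plain uint32_t values rather than GCC trees. Every function name below is invented for illustration; the sketch only assumes the semantics discussed in the thread: rotate counts wrap modulo the precision, and an unsigned shift by prec or more bits is known to yield zero.

    #include <assert.h>
    #include <stdint.h>

    /* Rotate left by n for a 32-bit value; rotation is periodic in the
       precision, and n == 0 is handled to avoid a shift by 32.  */
    static uint32_t rotl32 (uint32_t a, unsigned int n)
    {
      n %= 32;
      return n ? (a << n) | (a >> (32 - n)) : a;
    }

    /* Model of turning (a ROTL c1) ROTL c2 into a ROTL (c1 + c2):
       the counts simply add modulo the precision.  */
    static uint32_t fold_double_rotl (uint32_t a, unsigned int c1, unsigned int c2)
    {
      assert (c1 < 32 && c2 < 32);
      return rotl32 (a, (c1 + c2) % 32);
    }

    /* Model of turning (a << c1) << c2 into a << (c1 + c2): if the combined
       count reaches the precision, every bit has been shifted out and the
       result is zero -- the omit_one_operand case in the patch.  */
    static uint32_t fold_double_shl (uint32_t a, unsigned int c1, unsigned int c2)
    {
      assert (c1 < 32 && c2 < 32);
      unsigned int low = c1 + c2;
      if (low >= 32)
        return 0;
      return a << low;
    }

    int main (void)
    {
      assert (fold_double_rotl (0x80000001u, 20, 20)
              == rotl32 (rotl32 (0x80000001u, 20), 20));
      assert (fold_double_shl (0xFFFFFFFFu, 20, 20) == 0);
      assert (fold_double_shl (0xFFFFFFFFu, 3, 4) == (0xFFFFFFFFu << 7));
      return 0;
    }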