Message ID | 87io5yxbju.fsf@e105548-lin.cambridge.arm.com |
---|---|
State | New |
On Fri, Oct 23, 2015 at 9:56 AM, Richard Sandiford <richard.sandiford@arm.com> wrote: > Richard Biener <richard.guenther@gmail.com> writes: >>> @@ -12963,11 +12959,11 @@ tree_single_nonnegative_warnv_p (tree t, >> bool *strict_overflow_p, int depth) >>> If this code misses important cases that unbounded recursion >>> would not, passes that need this information could be revised >>> to provide it through dataflow propagation. */ >>> - if (depth < PARAM_VALUE (PARAM_MAX_SSA_NAME_QUERY_DEPTH)) >>> - return gimple_stmt_nonnegative_warnv_p (SSA_NAME_DEF_STMT (t), >>> - strict_overflow_p, depth); >>> + return (!name_registered_for_update_p (t) >>> + && depth < PARAM_VALUE (PARAM_MAX_SSA_NAME_QUERY_DEPTH) >>> + && gimple_stmt_nonnegative_warnv_p (SSA_NAME_DEF_STMT (t), >>> + strict_overflow_p, depth)); >>> >> >> I did it the above way because the ICE fixed segfaulted at the access >> to TREE_TYPE (t). But presumably this was already one level too deep >> into the chain. > > Ah, yeah, sounds plausible. > >>> +(for fns (TRUNC FLOOR CEIL ROUND NEARBYINT RINT) >>> + /* f(f(x)) -> f(x). */ >>> + (simplify >>> + (fns (fns @0)) >>> + (fns @0)) >>> + /* f(x) -> x if x is integer valued and f does nothing for such values. */ >>> + (if (!flag_errno_math) >> >> I wonder about this - only rint needs flag_errno_math protection I think, but >> even then how can an error possibly occur for a know integer valued real? > > We need the check because integer_valued_real_p allows +Inf, -Inf and +NaN > as integers(!) I've added a comment above the rule to say that. > > I agree that only rint really needs this -- fixed. > >>> @@ -2504,6 +2523,57 @@ along with GCC; see the file COPYING3. If not see >>> (mult (exps@1 (realpart @0)) (realpart (cexpis:type@2 (imagpart @0)))) >>> (mult @1 (imagpart @2))))))) >>> >>> +(if (canonicalize_math_p ()) >>> + /* floor(x) -> trunc(x) if x is nonnegative. */ >> >> I think this is not only a canonicalization. > > Why? It's not obviously an optimisation. There's no comment to explain > why the rule was there, but I assume it was to promote reuse in cases > where trunc and floor behave the same. It's there because most (all?) targets have a 'trunc' optab but not a floor one. > IMO passes like sincos should be allowed to make the final decision here, > based on what the target supports (and potentially on global information). Ok, fair enough. >>> + (for floors (FLOOR) >>> + truncs (TRUNC) >>> + (simplify >>> + (floors tree_expr_nonnegative_p@0) >>> + (truncs @0)))) >>> + >>> +(match double_value_p >>> + @0 >>> + (if (TYPE_MAIN_VARIANT (TREE_TYPE (@0)) == double_type_node))) >>> +(for froms (BUILT_IN_TRUNCL >>> + BUILT_IN_FLOORL >>> + BUILT_IN_CEILL >>> + BUILT_IN_ROUNDL >>> + BUILT_IN_NEARBYINTL >>> + BUILT_IN_RINTL) >>> + tos (BUILT_IN_TRUNC >>> + BUILT_IN_FLOOR >>> + BUILT_IN_CEIL >>> + BUILT_IN_ROUND >>> + BUILT_IN_NEARBYINT >>> + BUILT_IN_RINT) >>> + /* froms(extend(x)) -> extend(tos(x)). 
*/ >>> + (if (optimize && canonicalize_math_p ()) >>> + (simplify >>> + (froms (convert double_value_p@0)) >>> + (convert (tos @0))))) >>> + >>> +(match float_value_p >>> + @0 >>> + (if (TYPE_MAIN_VARIANT (TREE_TYPE (@0)) == float_type_node))) >>> +(for froms (BUILT_IN_TRUNCL BUILT_IN_TRUNC >>> + BUILT_IN_FLOORL BUILT_IN_FLOOR >>> + BUILT_IN_CEILL BUILT_IN_CEIL >>> + BUILT_IN_ROUNDL BUILT_IN_ROUND >>> + BUILT_IN_NEARBYINTL BUILT_IN_NEARBYINT >>> + BUILT_IN_RINTL BUILT_IN_RINT) >>> + tos (BUILT_IN_TRUNCF BUILT_IN_TRUNCF >>> + BUILT_IN_FLOORF BUILT_IN_FLOORF >>> + BUILT_IN_CEILF BUILT_IN_CEILF >>> + BUILT_IN_ROUNDF BUILT_IN_ROUNDF >>> + BUILT_IN_NEARBYINTF BUILT_IN_NEARBYINTF >>> + BUILT_IN_RINTF BUILT_IN_RINTF) >>> + /* froms(extend(x)) -> extend(tos(x)). */ >>> + (if (optimize && canonicalize_math_p ()) >>> + (simplify >>> + (froms (convert float_value_p@0)) >>> + (convert (tos @0))))) >> >> I think we more generally do this kind of transforms (for more >> functions, that is). I think somewhere >> in either fold-const.c or convert.c or frontend code ... > > Ah, yeah, thanks for the pointer. I've removed that too. > >> I also think this shouldn't be canonicalize_math_p () restricted. > > Here again I think we should let sincos make the final decision about > what functions to use, based on target support. The rule I'd used in > the original canonicalize_math_p patch was: > > /* Simplification of math builtins. These rules must all be optimizations > as well as IL simplifications. If there is a possibility that the new > form could be a pessimization, the rule should go in the canonicalization > section that follows this one. > > And this could be a pessimisation. E.g. converting trunc((double)f) > to (double)truncf(f) could be bad if there are other uses of the > (double)f extension. There's also no guarantee that truncf is faster > than trunc. Ok. > How about this version? Tested on x86_64-linux-gnu, aarch64-linux-gnu > and arm-linux-gnueabi. Ok. Thanks, Richard. > Thanks, > Richard > > > gcc/ > * builtins.c (integer_valued_real_p): Move to fold-const.c. > (fold_trunc_transparent_mathfn, fold_builtin_trunc, fold_builtin_floor) > (fold_builtin_ceil, fold_builtin_round): Delete. > (fold_builtin_1): Handle constant trunc, floor, ceil and round > arguments here. > * convert.c (convert_to_real): Remove narrowing of rounding > functions. > * fold-const.h (integer_valued_real_unary_p) > (integer_valued_real_binary_p, integer_valued_real_call_p) > (integer_valued_real_single_p, integer_valued_real_p): Declare. > * fold-const.c (tree_single_nonnegative_warnv_p): Move > name_registered_for_update_p check to SSA_NAME case statement. > Don't call tree_simple_nonnegative_warnv_p for SSA names. > (integer_valued_real_unary_p, integer_valued_real_binary_p) > (integer_valued_real_call_p, integer_valued_real_single_p) > (integer_valued_real_invalid_p): New functions. > (integer_valued_real_p): Move from fold-const.c and rework > to call the functions above. Handle SSA names. > * gimple-fold.h (gimple_stmt_integer_valued_real_p): Declare. > * gimple-fold.c (gimple_assign_integer_valued_real_p) > (gimple_call_integer_valued_real_p, gimple_phi_integer_valued_real_p) > (gimple_stmt_integer_valued_real_p): New functions. > * match.pd: Fold f(f(x))->f(x) for fp->fp rounding functions f. > Fold f(x)->x for the same f if x is known to be integer-valued. > Fold f(extend(x))->extend(f'(x)) if doing so doesn't affect > the result. Canonicalize floor(x) as trunc(x) if x is > nonnegative. 
> > gcc/testsuite/ > * gcc.c-torture/execute/20030125-1.c (floor, floorf, sin, sinf): > Make weak rather than noinline. > * gcc.dg/builtins-57.c: Compile with -O. > * gcc.dg/torture/builtin-integral-1.c: Skip for -O0. > > diff --git a/gcc/builtins.c b/gcc/builtins.c > index e5e65ba..c70bbfd 100644 > --- a/gcc/builtins.c > +++ b/gcc/builtins.c > @@ -154,16 +154,10 @@ static tree fold_builtin_inf (location_t, tree, int); > static tree fold_builtin_nan (tree, tree, int); > static tree rewrite_call_expr (location_t, tree, int, tree, int, ...); > static bool validate_arg (const_tree, enum tree_code code); > -static bool integer_valued_real_p (tree); > -static tree fold_trunc_transparent_mathfn (location_t, tree, tree); > static rtx expand_builtin_fabs (tree, rtx, rtx); > static rtx expand_builtin_signbit (tree, rtx); > static tree fold_builtin_pow (location_t, tree, tree, tree, tree); > static tree fold_builtin_powi (location_t, tree, tree, tree, tree); > -static tree fold_builtin_trunc (location_t, tree, tree); > -static tree fold_builtin_floor (location_t, tree, tree); > -static tree fold_builtin_ceil (location_t, tree, tree); > -static tree fold_builtin_round (location_t, tree, tree); > static tree fold_builtin_int_roundingfn (location_t, tree, tree); > static tree fold_builtin_bitop (tree, tree); > static tree fold_builtin_strchr (location_t, tree, tree, tree); > @@ -7320,117 +7314,6 @@ fold_builtin_nan (tree arg, tree type, int quiet) > return build_real (type, real); > } > > -/* Return true if the floating point expression T has an integer value. > - We also allow +Inf, -Inf and NaN to be considered integer values. */ > - > -static bool > -integer_valued_real_p (tree t) > -{ > - switch (TREE_CODE (t)) > - { > - case FLOAT_EXPR: > - return true; > - > - case ABS_EXPR: > - case SAVE_EXPR: > - return integer_valued_real_p (TREE_OPERAND (t, 0)); > - > - case COMPOUND_EXPR: > - case MODIFY_EXPR: > - case BIND_EXPR: > - return integer_valued_real_p (TREE_OPERAND (t, 1)); > - > - case PLUS_EXPR: > - case MINUS_EXPR: > - case MULT_EXPR: > - case MIN_EXPR: > - case MAX_EXPR: > - return integer_valued_real_p (TREE_OPERAND (t, 0)) > - && integer_valued_real_p (TREE_OPERAND (t, 1)); > - > - case COND_EXPR: > - return integer_valued_real_p (TREE_OPERAND (t, 1)) > - && integer_valued_real_p (TREE_OPERAND (t, 2)); > - > - case REAL_CST: > - return real_isinteger (TREE_REAL_CST_PTR (t), TYPE_MODE (TREE_TYPE (t))); > - > - CASE_CONVERT: > - { > - tree type = TREE_TYPE (TREE_OPERAND (t, 0)); > - if (TREE_CODE (type) == INTEGER_TYPE) > - return true; > - if (TREE_CODE (type) == REAL_TYPE) > - return integer_valued_real_p (TREE_OPERAND (t, 0)); > - break; > - } > - > - case CALL_EXPR: > - switch (builtin_mathfn_code (t)) > - { > - CASE_FLT_FN (BUILT_IN_CEIL): > - CASE_FLT_FN (BUILT_IN_FLOOR): > - CASE_FLT_FN (BUILT_IN_NEARBYINT): > - CASE_FLT_FN (BUILT_IN_RINT): > - CASE_FLT_FN (BUILT_IN_ROUND): > - CASE_FLT_FN (BUILT_IN_TRUNC): > - return true; > - > - CASE_FLT_FN (BUILT_IN_FMIN): > - CASE_FLT_FN (BUILT_IN_FMAX): > - return integer_valued_real_p (CALL_EXPR_ARG (t, 0)) > - && integer_valued_real_p (CALL_EXPR_ARG (t, 1)); > - > - default: > - break; > - } > - break; > - > - default: > - break; > - } > - return false; > -} > - > -/* FNDECL is assumed to be a builtin where truncation can be propagated > - across (for instance floor((double)f) == (double)floorf (f). > - Do the transformation for a call with argument ARG. 
*/ > - > -static tree > -fold_trunc_transparent_mathfn (location_t loc, tree fndecl, tree arg) > -{ > - enum built_in_function fcode = DECL_FUNCTION_CODE (fndecl); > - > - if (!validate_arg (arg, REAL_TYPE)) > - return NULL_TREE; > - > - /* Integer rounding functions are idempotent. */ > - if (fcode == builtin_mathfn_code (arg)) > - return arg; > - > - /* If argument is already integer valued, and we don't need to worry > - about setting errno, there's no need to perform rounding. */ > - if (! flag_errno_math && integer_valued_real_p (arg)) > - return arg; > - > - if (optimize) > - { > - tree arg0 = strip_float_extensions (arg); > - tree ftype = TREE_TYPE (TREE_TYPE (fndecl)); > - tree newtype = TREE_TYPE (arg0); > - tree decl; > - > - if (TYPE_PRECISION (newtype) < TYPE_PRECISION (ftype) > - && (decl = mathfn_built_in (newtype, fcode))) > - return fold_convert_loc (loc, ftype, > - build_call_expr_loc (loc, decl, 1, > - fold_convert_loc (loc, > - newtype, > - arg0))); > - } > - return NULL_TREE; > -} > - > /* FNDECL is assumed to be builtin which can narrow the FP type of > the argument, for instance lround((double)f) -> lroundf (f). > Do the transformation for a call with argument ARG. */ > @@ -7577,121 +7460,6 @@ fold_builtin_sincos (location_t loc, > build1 (REALPART_EXPR, type, call))); > } > > -/* Fold function call to builtin trunc, truncf or truncl with argument ARG. > - Return NULL_TREE if no simplification can be made. */ > - > -static tree > -fold_builtin_trunc (location_t loc, tree fndecl, tree arg) > -{ > - if (!validate_arg (arg, REAL_TYPE)) > - return NULL_TREE; > - > - /* Optimize trunc of constant value. */ > - if (TREE_CODE (arg) == REAL_CST && !TREE_OVERFLOW (arg)) > - { > - REAL_VALUE_TYPE r, x; > - tree type = TREE_TYPE (TREE_TYPE (fndecl)); > - > - x = TREE_REAL_CST (arg); > - real_trunc (&r, TYPE_MODE (type), &x); > - return build_real (type, r); > - } > - > - return fold_trunc_transparent_mathfn (loc, fndecl, arg); > -} > - > -/* Fold function call to builtin floor, floorf or floorl with argument ARG. > - Return NULL_TREE if no simplification can be made. */ > - > -static tree > -fold_builtin_floor (location_t loc, tree fndecl, tree arg) > -{ > - if (!validate_arg (arg, REAL_TYPE)) > - return NULL_TREE; > - > - /* Optimize floor of constant value. */ > - if (TREE_CODE (arg) == REAL_CST && !TREE_OVERFLOW (arg)) > - { > - REAL_VALUE_TYPE x; > - > - x = TREE_REAL_CST (arg); > - if (! REAL_VALUE_ISNAN (x) || ! flag_errno_math) > - { > - tree type = TREE_TYPE (TREE_TYPE (fndecl)); > - REAL_VALUE_TYPE r; > - > - real_floor (&r, TYPE_MODE (type), &x); > - return build_real (type, r); > - } > - } > - > - /* Fold floor (x) where x is nonnegative to trunc (x). */ > - if (tree_expr_nonnegative_p (arg)) > - { > - tree truncfn = mathfn_built_in (TREE_TYPE (arg), BUILT_IN_TRUNC); > - if (truncfn) > - return build_call_expr_loc (loc, truncfn, 1, arg); > - } > - > - return fold_trunc_transparent_mathfn (loc, fndecl, arg); > -} > - > -/* Fold function call to builtin ceil, ceilf or ceill with argument ARG. > - Return NULL_TREE if no simplification can be made. */ > - > -static tree > -fold_builtin_ceil (location_t loc, tree fndecl, tree arg) > -{ > - if (!validate_arg (arg, REAL_TYPE)) > - return NULL_TREE; > - > - /* Optimize ceil of constant value. */ > - if (TREE_CODE (arg) == REAL_CST && !TREE_OVERFLOW (arg)) > - { > - REAL_VALUE_TYPE x; > - > - x = TREE_REAL_CST (arg); > - if (! REAL_VALUE_ISNAN (x) || ! 
flag_errno_math) > - { > - tree type = TREE_TYPE (TREE_TYPE (fndecl)); > - REAL_VALUE_TYPE r; > - > - real_ceil (&r, TYPE_MODE (type), &x); > - return build_real (type, r); > - } > - } > - > - return fold_trunc_transparent_mathfn (loc, fndecl, arg); > -} > - > -/* Fold function call to builtin round, roundf or roundl with argument ARG. > - Return NULL_TREE if no simplification can be made. */ > - > -static tree > -fold_builtin_round (location_t loc, tree fndecl, tree arg) > -{ > - if (!validate_arg (arg, REAL_TYPE)) > - return NULL_TREE; > - > - /* Optimize round of constant value. */ > - if (TREE_CODE (arg) == REAL_CST && !TREE_OVERFLOW (arg)) > - { > - REAL_VALUE_TYPE x; > - > - x = TREE_REAL_CST (arg); > - if (! REAL_VALUE_ISNAN (x) || ! flag_errno_math) > - { > - tree type = TREE_TYPE (TREE_TYPE (fndecl)); > - REAL_VALUE_TYPE r; > - > - real_round (&r, TYPE_MODE (type), &x); > - return build_real (type, r); > - } > - } > - > - return fold_trunc_transparent_mathfn (loc, fndecl, arg); > -} > - > /* Fold function call to builtin lround, lroundf or lroundl (or the > corresponding long long versions) and other rounding functions. ARG > is the argument to the call. Return NULL_TREE if no simplification > @@ -9631,20 +9399,56 @@ fold_builtin_1 (location_t loc, tree fndecl, tree arg0) > return fold_builtin_nan (arg0, type, false); > > CASE_FLT_FN (BUILT_IN_FLOOR): > - return fold_builtin_floor (loc, fndecl, arg0); > + if (TREE_CODE (arg0) == REAL_CST && !TREE_OVERFLOW (arg0)) > + { > + REAL_VALUE_TYPE x = TREE_REAL_CST (arg0); > + if (!REAL_VALUE_ISNAN (x) || !flag_errno_math) > + { > + tree type = TREE_TYPE (TREE_TYPE (fndecl)); > + REAL_VALUE_TYPE r; > + real_floor (&r, TYPE_MODE (type), &x); > + return build_real (type, r); > + } > + } > + break; > > CASE_FLT_FN (BUILT_IN_CEIL): > - return fold_builtin_ceil (loc, fndecl, arg0); > + if (TREE_CODE (arg0) == REAL_CST && !TREE_OVERFLOW (arg0)) > + { > + REAL_VALUE_TYPE x = TREE_REAL_CST (arg0); > + if (!REAL_VALUE_ISNAN (x) || !flag_errno_math) > + { > + tree type = TREE_TYPE (TREE_TYPE (fndecl)); > + REAL_VALUE_TYPE r; > + real_ceil (&r, TYPE_MODE (type), &x); > + return build_real (type, r); > + } > + } > + break; > > CASE_FLT_FN (BUILT_IN_TRUNC): > - return fold_builtin_trunc (loc, fndecl, arg0); > + if (TREE_CODE (arg0) == REAL_CST && !TREE_OVERFLOW (arg0)) > + { > + REAL_VALUE_TYPE x = TREE_REAL_CST (arg0); > + REAL_VALUE_TYPE r; > + real_trunc (&r, TYPE_MODE (type), &x); > + return build_real (type, r); > + } > + break; > > CASE_FLT_FN (BUILT_IN_ROUND): > - return fold_builtin_round (loc, fndecl, arg0); > - > - CASE_FLT_FN (BUILT_IN_NEARBYINT): > - CASE_FLT_FN (BUILT_IN_RINT): > - return fold_trunc_transparent_mathfn (loc, fndecl, arg0); > + if (TREE_CODE (arg0) == REAL_CST && !TREE_OVERFLOW (arg0)) > + { > + REAL_VALUE_TYPE x = TREE_REAL_CST (arg0); > + if (!REAL_VALUE_ISNAN (x) || !flag_errno_math) > + { > + tree type = TREE_TYPE (TREE_TYPE (fndecl)); > + REAL_VALUE_TYPE r; > + real_round (&r, TYPE_MODE (type), &x); > + return build_real (type, r); > + } > + } > + break; > > CASE_FLT_FN (BUILT_IN_ICEIL): > CASE_FLT_FN (BUILT_IN_LCEIL): > diff --git a/gcc/convert.c b/gcc/convert.c > index bff2978..498d3a5 100644 > --- a/gcc/convert.c > +++ b/gcc/convert.c > @@ -225,37 +225,6 @@ convert_to_real (tree type, tree expr) > break; > } > } > - if (optimize > - && (((fcode == BUILT_IN_FLOORL > - || fcode == BUILT_IN_CEILL > - || fcode == BUILT_IN_ROUNDL > - || fcode == BUILT_IN_RINTL > - || fcode == BUILT_IN_TRUNCL > - || fcode == 
BUILT_IN_NEARBYINTL) > - && (TYPE_MODE (type) == TYPE_MODE (double_type_node) > - || TYPE_MODE (type) == TYPE_MODE (float_type_node))) > - || ((fcode == BUILT_IN_FLOOR > - || fcode == BUILT_IN_CEIL > - || fcode == BUILT_IN_ROUND > - || fcode == BUILT_IN_RINT > - || fcode == BUILT_IN_TRUNC > - || fcode == BUILT_IN_NEARBYINT) > - && (TYPE_MODE (type) == TYPE_MODE (float_type_node))))) > - { > - tree fn = mathfn_built_in (type, fcode); > - > - if (fn) > - { > - tree arg = strip_float_extensions (CALL_EXPR_ARG (expr, 0)); > - > - /* Make sure (type)arg0 is an extension, otherwise we could end up > - changing (float)floor(double d) into floorf((float)d), which is > - incorrect because (float)d uses round-to-nearest and can round > - up to the next integer. */ > - if (TYPE_PRECISION (type) >= TYPE_PRECISION (TREE_TYPE (arg))) > - return build_call_expr (fn, 1, fold (convert_to_real (type, arg))); > - } > - } > > /* Propagate the cast into the operation. */ > if (itype != type && FLOAT_TYPE_P (type)) > diff --git a/gcc/fold-const.c b/gcc/fold-const.c > index c4be017..6eed7b6 100644 > --- a/gcc/fold-const.c > +++ b/gcc/fold-const.c > @@ -12896,10 +12896,6 @@ tree_binary_nonnegative_warnv_p (enum tree_code code, tree type, tree op0, > bool > tree_single_nonnegative_warnv_p (tree t, bool *strict_overflow_p, int depth) > { > - if (TREE_CODE (t) == SSA_NAME > - && name_registered_for_update_p (t)) > - return false; > - > if (TYPE_UNSIGNED (TREE_TYPE (t))) > return true; > > @@ -12923,11 +12919,11 @@ tree_single_nonnegative_warnv_p (tree t, bool *strict_overflow_p, int depth) > If this code misses important cases that unbounded recursion > would not, passes that need this information could be revised > to provide it through dataflow propagation. */ > - if (depth < PARAM_VALUE (PARAM_MAX_SSA_NAME_QUERY_DEPTH)) > - return gimple_stmt_nonnegative_warnv_p (SSA_NAME_DEF_STMT (t), > - strict_overflow_p, depth); > + return (!name_registered_for_update_p (t) > + && depth < PARAM_VALUE (PARAM_MAX_SSA_NAME_QUERY_DEPTH) > + && gimple_stmt_nonnegative_warnv_p (SSA_NAME_DEF_STMT (t), > + strict_overflow_p, depth)); > > - /* Fallthru. */ > default: > return tree_simple_nonnegative_warnv_p (TREE_CODE (t), TREE_TYPE (t)); > } > @@ -13440,6 +13436,216 @@ tree_single_nonzero_warnv_p (tree t, bool *strict_overflow_p) > return false; > } > > +#define integer_valued_real_p(X) \ > + _Pragma ("GCC error \"Use RECURSE for recursive calls\"") 0 > + > +#define RECURSE(X) \ > + ((integer_valued_real_p) (X, depth + 1)) > + > +/* Return true if the floating point result of (CODE OP0) has an > + integer value. We also allow +Inf, -Inf and NaN to be considered > + integer values. > + > + DEPTH is the current nesting depth of the query. */ > + > +bool > +integer_valued_real_unary_p (tree_code code, tree op0, int depth) > +{ > + switch (code) > + { > + case FLOAT_EXPR: > + return true; > + > + case ABS_EXPR: > + return RECURSE (op0); > + > + CASE_CONVERT: > + { > + tree type = TREE_TYPE (op0); > + if (TREE_CODE (type) == INTEGER_TYPE) > + return true; > + if (TREE_CODE (type) == REAL_TYPE) > + return RECURSE (op0); > + break; > + } > + > + default: > + break; > + } > + return false; > +} > + > +/* Return true if the floating point result of (CODE OP0 OP1) has an > + integer value. We also allow +Inf, -Inf and NaN to be considered > + integer values. > + > + DEPTH is the current nesting depth of the query. 
*/ > + > +bool > +integer_valued_real_binary_p (tree_code code, tree op0, tree op1, int depth) > +{ > + switch (code) > + { > + case PLUS_EXPR: > + case MINUS_EXPR: > + case MULT_EXPR: > + case MIN_EXPR: > + case MAX_EXPR: > + return RECURSE (op0) && RECURSE (op1); > + > + default: > + break; > + } > + return false; > +} > + > +/* Return true if the floating point result of calling FNDECL with arguments > + ARG0 and ARG1 has an integer value. We also allow +Inf, -Inf and NaN to be > + considered integer values. If FNDECL takes fewer than 2 arguments, > + the remaining ARGn are null. > + > + DEPTH is the current nesting depth of the query. */ > + > +bool > +integer_valued_real_call_p (tree fndecl, tree arg0, tree arg1, int depth) > +{ > + if (fndecl && DECL_BUILT_IN_CLASS (fndecl) == BUILT_IN_NORMAL) > + switch (DECL_FUNCTION_CODE (fndecl)) > + { > + CASE_FLT_FN (BUILT_IN_CEIL): > + CASE_FLT_FN (BUILT_IN_FLOOR): > + CASE_FLT_FN (BUILT_IN_NEARBYINT): > + CASE_FLT_FN (BUILT_IN_RINT): > + CASE_FLT_FN (BUILT_IN_ROUND): > + CASE_FLT_FN (BUILT_IN_TRUNC): > + return true; > + > + CASE_FLT_FN (BUILT_IN_FMIN): > + CASE_FLT_FN (BUILT_IN_FMAX): > + return RECURSE (arg0) && RECURSE (arg1); > + > + default: > + break; > + } > + return false; > +} > + > +/* Return true if the floating point expression T (a GIMPLE_SINGLE_RHS) > + has an integer value. We also allow +Inf, -Inf and NaN to be > + considered integer values. > + > + DEPTH is the current nesting depth of the query. */ > + > +bool > +integer_valued_real_single_p (tree t, int depth) > +{ > + switch (TREE_CODE (t)) > + { > + case REAL_CST: > + return real_isinteger (TREE_REAL_CST_PTR (t), TYPE_MODE (TREE_TYPE (t))); > + > + case COND_EXPR: > + return RECURSE (TREE_OPERAND (t, 1)) && RECURSE (TREE_OPERAND (t, 2)); > + > + case SSA_NAME: > + /* Limit the depth of recursion to avoid quadratic behavior. > + This is expected to catch almost all occurrences in practice. > + If this code misses important cases that unbounded recursion > + would not, passes that need this information could be revised > + to provide it through dataflow propagation. */ > + return (!name_registered_for_update_p (t) > + && depth < PARAM_VALUE (PARAM_MAX_SSA_NAME_QUERY_DEPTH) > + && gimple_stmt_integer_valued_real_p (SSA_NAME_DEF_STMT (t), > + depth)); > + > + default: > + break; > + } > + return false; > +} > + > +/* Return true if the floating point expression T (a GIMPLE_INVALID_RHS) > + has an integer value. We also allow +Inf, -Inf and NaN to be > + considered integer values. > + > + DEPTH is the current nesting depth of the query. */ > + > +static bool > +integer_valued_real_invalid_p (tree t, int depth) > +{ > + switch (TREE_CODE (t)) > + { > + case COMPOUND_EXPR: > + case MODIFY_EXPR: > + case BIND_EXPR: > + return RECURSE (TREE_OPERAND (t, 1)); > + > + case SAVE_EXPR: > + return RECURSE (TREE_OPERAND (t, 0)); > + > + default: > + break; > + } > + return false; > +} > + > +#undef RECURSE > +#undef integer_valued_real_p > + > +/* Return true if the floating point expression T has an integer value. > + We also allow +Inf, -Inf and NaN to be considered integer values. > + > + DEPTH is the current nesting depth of the query. 
*/ > + > +bool > +integer_valued_real_p (tree t, int depth) > +{ > + if (t == error_mark_node) > + return false; > + > + tree_code code = TREE_CODE (t); > + switch (TREE_CODE_CLASS (code)) > + { > + case tcc_binary: > + case tcc_comparison: > + return integer_valued_real_binary_p (code, TREE_OPERAND (t, 0), > + TREE_OPERAND (t, 1), depth); > + > + case tcc_unary: > + return integer_valued_real_unary_p (code, TREE_OPERAND (t, 0), depth); > + > + case tcc_constant: > + case tcc_declaration: > + case tcc_reference: > + return integer_valued_real_single_p (t, depth); > + > + default: > + break; > + } > + > + switch (code) > + { > + case COND_EXPR: > + case SSA_NAME: > + return integer_valued_real_single_p (t, depth); > + > + case CALL_EXPR: > + { > + tree arg0 = (call_expr_nargs (t) > 0 > + ? CALL_EXPR_ARG (t, 0) > + : NULL_TREE); > + tree arg1 = (call_expr_nargs (t) > 1 > + ? CALL_EXPR_ARG (t, 1) > + : NULL_TREE); > + return integer_valued_real_call_p (get_callee_fndecl (t), > + arg0, arg1, depth); > + } > + > + default: > + return integer_valued_real_invalid_p (t, depth); > + } > +} > + > /* Given the components of a binary expression CODE, TYPE, OP0 and OP1, > attempt to fold the expression to a constant without modifying TYPE, > OP0 or OP1. > diff --git a/gcc/fold-const.h b/gcc/fold-const.h > index 1bb68e4..8e49c98 100644 > --- a/gcc/fold-const.h > +++ b/gcc/fold-const.h > @@ -139,6 +139,12 @@ extern bool tree_single_nonnegative_warnv_p (tree, bool *, int); > extern bool tree_call_nonnegative_warnv_p (tree, tree, tree, tree, bool *, > int); > > +extern bool integer_valued_real_unary_p (tree_code, tree, int); > +extern bool integer_valued_real_binary_p (tree_code, tree, tree, int); > +extern bool integer_valued_real_call_p (tree, tree, tree, int); > +extern bool integer_valued_real_single_p (tree, int); > +extern bool integer_valued_real_p (tree, int = 0); > + > extern bool fold_real_zero_addition_p (const_tree, const_tree, int); > extern tree combine_comparisons (location_t, enum tree_code, enum tree_code, > enum tree_code, tree, tree, tree); > diff --git a/gcc/gimple-fold.c b/gcc/gimple-fold.c > index 85ff018..1869c09 100644 > --- a/gcc/gimple-fold.c > +++ b/gcc/gimple-fold.c > @@ -6266,3 +6266,91 @@ gimple_stmt_nonnegative_warnv_p (gimple *stmt, bool *strict_overflow_p, > return false; > } > } > + > +/* Return true if the floating-point value computed by assignment STMT > + is known to have an integer value. We also allow +Inf, -Inf and NaN > + to be considered integer values. > + > + DEPTH is the current nesting depth of the query. */ > + > +static bool > +gimple_assign_integer_valued_real_p (gimple *stmt, int depth) > +{ > + enum tree_code code = gimple_assign_rhs_code (stmt); > + switch (get_gimple_rhs_class (code)) > + { > + case GIMPLE_UNARY_RHS: > + return integer_valued_real_unary_p (gimple_assign_rhs_code (stmt), > + gimple_assign_rhs1 (stmt), depth); > + case GIMPLE_BINARY_RHS: > + return integer_valued_real_binary_p (gimple_assign_rhs_code (stmt), > + gimple_assign_rhs1 (stmt), > + gimple_assign_rhs2 (stmt), depth); > + case GIMPLE_TERNARY_RHS: > + return false; > + case GIMPLE_SINGLE_RHS: > + return integer_valued_real_single_p (gimple_assign_rhs1 (stmt), depth); > + case GIMPLE_INVALID_RHS: > + break; > + } > + gcc_unreachable (); > +} > + > +/* Return true if the floating-point value computed by call STMT is known > + to have an integer value. We also allow +Inf, -Inf and NaN to be > + considered integer values. > + > + DEPTH is the current nesting depth of the query. 
*/ > + > +static bool > +gimple_call_integer_valued_real_p (gimple *stmt, int depth) > +{ > + tree arg0 = (gimple_call_num_args (stmt) > 0 > + ? gimple_call_arg (stmt, 0) > + : NULL_TREE); > + tree arg1 = (gimple_call_num_args (stmt) > 1 > + ? gimple_call_arg (stmt, 1) > + : NULL_TREE); > + return integer_valued_real_call_p (gimple_call_fndecl (stmt), > + arg0, arg1, depth); > +} > + > +/* Return true if the floating-point result of phi STMT is known to have > + an integer value. We also allow +Inf, -Inf and NaN to be considered > + integer values. > + > + DEPTH is the current nesting depth of the query. */ > + > +static bool > +gimple_phi_integer_valued_real_p (gimple *stmt, int depth) > +{ > + for (unsigned i = 0; i < gimple_phi_num_args (stmt); ++i) > + { > + tree arg = gimple_phi_arg_def (stmt, i); > + if (!integer_valued_real_single_p (arg, depth + 1)) > + return false; > + } > + return true; > +} > + > +/* Return true if the floating-point value computed by STMT is known > + to have an integer value. We also allow +Inf, -Inf and NaN to be > + considered integer values. > + > + DEPTH is the current nesting depth of the query. */ > + > +bool > +gimple_stmt_integer_valued_real_p (gimple *stmt, int depth) > +{ > + switch (gimple_code (stmt)) > + { > + case GIMPLE_ASSIGN: > + return gimple_assign_integer_valued_real_p (stmt, depth); > + case GIMPLE_CALL: > + return gimple_call_integer_valued_real_p (stmt, depth); > + case GIMPLE_PHI: > + return gimple_phi_integer_valued_real_p (stmt, depth); > + default: > + return false; > + } > +} > diff --git a/gcc/gimple-fold.h b/gcc/gimple-fold.h > index 61edd69..9c24f77 100644 > --- a/gcc/gimple-fold.h > +++ b/gcc/gimple-fold.h > @@ -120,6 +120,7 @@ gimple_convert_to_ptrofftype (gimple_seq *seq, tree op) > } > > extern bool gimple_stmt_nonnegative_warnv_p (gimple *, bool *, int = 0); > +extern bool gimple_stmt_integer_valued_real_p (gimple *, int = 0); > > /* In gimple-match.c. */ > extern tree gimple_simplify (enum tree_code, tree, tree, > diff --git a/gcc/match.pd b/gcc/match.pd > index 98bb903..f7b7792 100644 > --- a/gcc/match.pd > +++ b/gcc/match.pd > @@ -31,6 +31,7 @@ along with GCC; see the file COPYING3. If not see > zerop > CONSTANT_CLASS_P > tree_expr_nonnegative_p > + integer_valued_real_p > integer_pow2p > HONOR_NANS) > > @@ -71,6 +72,14 @@ along with GCC; see the file COPYING3. If not see > BUILT_IN_COPYSIGN > BUILT_IN_COPYSIGNL) > (define_operator_list CABS BUILT_IN_CABSF BUILT_IN_CABS BUILT_IN_CABSL) > +(define_operator_list TRUNC BUILT_IN_TRUNCF BUILT_IN_TRUNC BUILT_IN_TRUNCL) > +(define_operator_list FLOOR BUILT_IN_FLOORF BUILT_IN_FLOOR BUILT_IN_FLOORL) > +(define_operator_list CEIL BUILT_IN_CEILF BUILT_IN_CEIL BUILT_IN_CEILL) > +(define_operator_list ROUND BUILT_IN_ROUNDF BUILT_IN_ROUND BUILT_IN_ROUNDL) > +(define_operator_list NEARBYINT BUILT_IN_NEARBYINTF > + BUILT_IN_NEARBYINT > + BUILT_IN_NEARBYINTL) > +(define_operator_list RINT BUILT_IN_RINTF BUILT_IN_RINT BUILT_IN_RINTL) > > /* Simplifications of operations with one constant operand and > simplifications to constants or single values. */ > @@ -2445,6 +2454,23 @@ along with GCC; see the file COPYING3. If not see > (CABS (complex:c @0 real_zerop@1)) > (abs @0)) > > +/* trunc(trunc(x)) -> trunc(x), etc. */ > +(for fns (TRUNC FLOOR CEIL ROUND NEARBYINT RINT) > + (simplify > + (fns (fns @0)) > + (fns @0))) > +/* f(x) -> x if x is integer valued and f does nothing for such values. 
*/ > +(for fns (TRUNC FLOOR CEIL ROUND NEARBYINT) > + (simplify > + (fns integer_valued_real_p@0) > + @0)) > +/* Same for rint. We have to check flag_errno_math because > + integer_valued_real_p accepts +Inf, -Inf and NaNs as integers. */ > +(if (!flag_errno_math) > + (simplify > + (RINT integer_valued_real_p@0) > + @0)) > + > /* Canonicalization of sequences of math builtins. These rules represent > IL simplifications but are not necessarily optimizations. > > @@ -2554,6 +2580,57 @@ along with GCC; see the file COPYING3. If not see > (mult (exps@1 (realpart @0)) (realpart (cexpis:type@2 (imagpart @0)))) > (mult @1 (imagpart @2))))))) > > +(if (canonicalize_math_p ()) > + /* floor(x) -> trunc(x) if x is nonnegative. */ > + (for floors (FLOOR) > + truncs (TRUNC) > + (simplify > + (floors tree_expr_nonnegative_p@0) > + (truncs @0)))) > + > +(match double_value_p > + @0 > + (if (TYPE_MAIN_VARIANT (TREE_TYPE (@0)) == double_type_node))) > +(for froms (BUILT_IN_TRUNCL > + BUILT_IN_FLOORL > + BUILT_IN_CEILL > + BUILT_IN_ROUNDL > + BUILT_IN_NEARBYINTL > + BUILT_IN_RINTL) > + tos (BUILT_IN_TRUNC > + BUILT_IN_FLOOR > + BUILT_IN_CEIL > + BUILT_IN_ROUND > + BUILT_IN_NEARBYINT > + BUILT_IN_RINT) > + /* truncl(extend(x)) -> extend(trunc(x)), etc., if x is a double. */ > + (if (optimize && canonicalize_math_p ()) > + (simplify > + (froms (convert double_value_p@0)) > + (convert (tos @0))))) > + > +(match float_value_p > + @0 > + (if (TYPE_MAIN_VARIANT (TREE_TYPE (@0)) == float_type_node))) > +(for froms (BUILT_IN_TRUNCL BUILT_IN_TRUNC > + BUILT_IN_FLOORL BUILT_IN_FLOOR > + BUILT_IN_CEILL BUILT_IN_CEIL > + BUILT_IN_ROUNDL BUILT_IN_ROUND > + BUILT_IN_NEARBYINTL BUILT_IN_NEARBYINT > + BUILT_IN_RINTL BUILT_IN_RINT) > + tos (BUILT_IN_TRUNCF BUILT_IN_TRUNCF > + BUILT_IN_FLOORF BUILT_IN_FLOORF > + BUILT_IN_CEILF BUILT_IN_CEILF > + BUILT_IN_ROUNDF BUILT_IN_ROUNDF > + BUILT_IN_NEARBYINTF BUILT_IN_NEARBYINTF > + BUILT_IN_RINTF BUILT_IN_RINTF) > + /* truncl(extend(x)) and trunc(extend(x)) -> extend(truncf(x)), etc., > + if x is a float. */ > + (if (optimize && canonicalize_math_p ()) > + (simplify > + (froms (convert float_value_p@0)) > + (convert (tos @0))))) > + > /* cproj(x) -> x if we're ignoring infinities. */ > (simplify > (CPROJ @0) > diff --git a/gcc/testsuite/gcc.c-torture/execute/20030125-1.c b/gcc/testsuite/gcc.c-torture/execute/20030125-1.c > index 60ede34..960552c3 100644 > --- a/gcc/testsuite/gcc.c-torture/execute/20030125-1.c > +++ b/gcc/testsuite/gcc.c-torture/execute/20030125-1.c > @@ -1,5 +1,6 @@ > /* Verify whether math functions are simplified. 
*/ > /* { dg-require-effective-target c99_runtime } */ > +/* { dg-require-weak } */ > double sin(double); > double floor(double); > float > @@ -29,25 +30,25 @@ main() > #endif > return 0; > } > -__attribute__ ((noinline)) > +__attribute__ ((weak)) > double > floor(double a) > { > abort (); > } > -__attribute__ ((noinline)) > +__attribute__ ((weak)) > float > floorf(float a) > { > return a; > } > -__attribute__ ((noinline)) > +__attribute__ ((weak)) > double > sin(double a) > { > return a; > } > -__attribute__ ((noinline)) > +__attribute__ ((weak)) > float > sinf(float a) > { > diff --git a/gcc/testsuite/gcc.dg/builtins-57.c b/gcc/testsuite/gcc.dg/builtins-57.c > index 361826c..18d40e8 100644 > --- a/gcc/testsuite/gcc.dg/builtins-57.c > +++ b/gcc/testsuite/gcc.dg/builtins-57.c > @@ -1,5 +1,5 @@ > /* { dg-do link } */ > -/* { dg-options "-std=c99 -ffinite-math-only" } */ > +/* { dg-options "-std=c99 -ffinite-math-only -O" } */ > > #include "builtins-config.h" > > diff --git a/gcc/testsuite/gcc.dg/torture/builtin-integral-1.c b/gcc/testsuite/gcc.dg/torture/builtin-integral-1.c > index 522646d..f3c3338 100644 > --- a/gcc/testsuite/gcc.dg/torture/builtin-integral-1.c > +++ b/gcc/testsuite/gcc.dg/torture/builtin-integral-1.c > @@ -10,6 +10,7 @@ > that various math functions are marked const/pure and can be > folded. */ > /* { dg-options "-ffinite-math-only -fno-math-errno" } */ > +/* { dg-skip-if "" { *-*-* } { "-O0" } { "" } } */ > > extern int link_failure (int); > >
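To make the new match.pd rules discussed above more concrete, here is a small illustrative C sketch. It is not part of the patch: the function names are invented for this example, and each comment only notes the folding the corresponding rule is expected to perform under the conditions mentioned in the thread (e.g. `canonicalize_math_p ()` for the floor->trunc and narrowing cases).

```c
#include <math.h>

/* f(f(x)) -> f(x): the inner rounding call is redundant.  */
double
redundant_floor (double x)
{
  return floor (floor (x));   /* expected to fold to floor (x) */
}

/* floor(x) -> trunc(x) when x is known nonnegative; fabs(x) satisfies
   tree_expr_nonnegative_p.  Done only as a canonicalization.  */
double
floor_of_abs (double x)
{
  return floor (fabs (x));    /* expected to become trunc (fabs (x)) */
}

/* f(x) -> x when x is known to be integer valued: a conversion from an
   integer type is integer valued by construction.  */
double
round_of_int (int i)
{
  return round ((double) i);  /* expected to fold to (double) i */
}

/* froms(extend(x)) -> extend(tos(x)): narrowing the call is only a
   canonicalization, since other uses of the extension could make it a
   pessimization, as noted in the review.  */
double
narrowed_trunc (float f)
{
  return trunc ((double) f);  /* expected to become (double) truncf (f) */
}
```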
diff --git a/gcc/builtins.c b/gcc/builtins.c index e5e65ba..c70bbfd 100644 --- a/gcc/builtins.c +++ b/gcc/builtins.c @@ -154,16 +154,10 @@ static tree fold_builtin_inf (location_t, tree, int); static tree fold_builtin_nan (tree, tree, int); static tree rewrite_call_expr (location_t, tree, int, tree, int, ...); static bool validate_arg (const_tree, enum tree_code code); -static bool integer_valued_real_p (tree); -static tree fold_trunc_transparent_mathfn (location_t, tree, tree); static rtx expand_builtin_fabs (tree, rtx, rtx); static rtx expand_builtin_signbit (tree, rtx); static tree fold_builtin_pow (location_t, tree, tree, tree, tree); static tree fold_builtin_powi (location_t, tree, tree, tree, tree); -static tree fold_builtin_trunc (location_t, tree, tree); -static tree fold_builtin_floor (location_t, tree, tree); -static tree fold_builtin_ceil (location_t, tree, tree); -static tree fold_builtin_round (location_t, tree, tree); static tree fold_builtin_int_roundingfn (location_t, tree, tree); static tree fold_builtin_bitop (tree, tree); static tree fold_builtin_strchr (location_t, tree, tree, tree); @@ -7320,117 +7314,6 @@ fold_builtin_nan (tree arg, tree type, int quiet) return build_real (type, real); } -/* Return true if the floating point expression T has an integer value. - We also allow +Inf, -Inf and NaN to be considered integer values. */ - -static bool -integer_valued_real_p (tree t) -{ - switch (TREE_CODE (t)) - { - case FLOAT_EXPR: - return true; - - case ABS_EXPR: - case SAVE_EXPR: - return integer_valued_real_p (TREE_OPERAND (t, 0)); - - case COMPOUND_EXPR: - case MODIFY_EXPR: - case BIND_EXPR: - return integer_valued_real_p (TREE_OPERAND (t, 1)); - - case PLUS_EXPR: - case MINUS_EXPR: - case MULT_EXPR: - case MIN_EXPR: - case MAX_EXPR: - return integer_valued_real_p (TREE_OPERAND (t, 0)) - && integer_valued_real_p (TREE_OPERAND (t, 1)); - - case COND_EXPR: - return integer_valued_real_p (TREE_OPERAND (t, 1)) - && integer_valued_real_p (TREE_OPERAND (t, 2)); - - case REAL_CST: - return real_isinteger (TREE_REAL_CST_PTR (t), TYPE_MODE (TREE_TYPE (t))); - - CASE_CONVERT: - { - tree type = TREE_TYPE (TREE_OPERAND (t, 0)); - if (TREE_CODE (type) == INTEGER_TYPE) - return true; - if (TREE_CODE (type) == REAL_TYPE) - return integer_valued_real_p (TREE_OPERAND (t, 0)); - break; - } - - case CALL_EXPR: - switch (builtin_mathfn_code (t)) - { - CASE_FLT_FN (BUILT_IN_CEIL): - CASE_FLT_FN (BUILT_IN_FLOOR): - CASE_FLT_FN (BUILT_IN_NEARBYINT): - CASE_FLT_FN (BUILT_IN_RINT): - CASE_FLT_FN (BUILT_IN_ROUND): - CASE_FLT_FN (BUILT_IN_TRUNC): - return true; - - CASE_FLT_FN (BUILT_IN_FMIN): - CASE_FLT_FN (BUILT_IN_FMAX): - return integer_valued_real_p (CALL_EXPR_ARG (t, 0)) - && integer_valued_real_p (CALL_EXPR_ARG (t, 1)); - - default: - break; - } - break; - - default: - break; - } - return false; -} - -/* FNDECL is assumed to be a builtin where truncation can be propagated - across (for instance floor((double)f) == (double)floorf (f). - Do the transformation for a call with argument ARG. */ - -static tree -fold_trunc_transparent_mathfn (location_t loc, tree fndecl, tree arg) -{ - enum built_in_function fcode = DECL_FUNCTION_CODE (fndecl); - - if (!validate_arg (arg, REAL_TYPE)) - return NULL_TREE; - - /* Integer rounding functions are idempotent. */ - if (fcode == builtin_mathfn_code (arg)) - return arg; - - /* If argument is already integer valued, and we don't need to worry - about setting errno, there's no need to perform rounding. */ - if (! 
flag_errno_math && integer_valued_real_p (arg)) - return arg; - - if (optimize) - { - tree arg0 = strip_float_extensions (arg); - tree ftype = TREE_TYPE (TREE_TYPE (fndecl)); - tree newtype = TREE_TYPE (arg0); - tree decl; - - if (TYPE_PRECISION (newtype) < TYPE_PRECISION (ftype) - && (decl = mathfn_built_in (newtype, fcode))) - return fold_convert_loc (loc, ftype, - build_call_expr_loc (loc, decl, 1, - fold_convert_loc (loc, - newtype, - arg0))); - } - return NULL_TREE; -} - /* FNDECL is assumed to be builtin which can narrow the FP type of the argument, for instance lround((double)f) -> lroundf (f). Do the transformation for a call with argument ARG. */ @@ -7577,121 +7460,6 @@ fold_builtin_sincos (location_t loc, build1 (REALPART_EXPR, type, call))); } -/* Fold function call to builtin trunc, truncf or truncl with argument ARG. - Return NULL_TREE if no simplification can be made. */ - -static tree -fold_builtin_trunc (location_t loc, tree fndecl, tree arg) -{ - if (!validate_arg (arg, REAL_TYPE)) - return NULL_TREE; - - /* Optimize trunc of constant value. */ - if (TREE_CODE (arg) == REAL_CST && !TREE_OVERFLOW (arg)) - { - REAL_VALUE_TYPE r, x; - tree type = TREE_TYPE (TREE_TYPE (fndecl)); - - x = TREE_REAL_CST (arg); - real_trunc (&r, TYPE_MODE (type), &x); - return build_real (type, r); - } - - return fold_trunc_transparent_mathfn (loc, fndecl, arg); -} - -/* Fold function call to builtin floor, floorf or floorl with argument ARG. - Return NULL_TREE if no simplification can be made. */ - -static tree -fold_builtin_floor (location_t loc, tree fndecl, tree arg) -{ - if (!validate_arg (arg, REAL_TYPE)) - return NULL_TREE; - - /* Optimize floor of constant value. */ - if (TREE_CODE (arg) == REAL_CST && !TREE_OVERFLOW (arg)) - { - REAL_VALUE_TYPE x; - - x = TREE_REAL_CST (arg); - if (! REAL_VALUE_ISNAN (x) || ! flag_errno_math) - { - tree type = TREE_TYPE (TREE_TYPE (fndecl)); - REAL_VALUE_TYPE r; - - real_floor (&r, TYPE_MODE (type), &x); - return build_real (type, r); - } - } - - /* Fold floor (x) where x is nonnegative to trunc (x). */ - if (tree_expr_nonnegative_p (arg)) - { - tree truncfn = mathfn_built_in (TREE_TYPE (arg), BUILT_IN_TRUNC); - if (truncfn) - return build_call_expr_loc (loc, truncfn, 1, arg); - } - - return fold_trunc_transparent_mathfn (loc, fndecl, arg); -} - -/* Fold function call to builtin ceil, ceilf or ceill with argument ARG. - Return NULL_TREE if no simplification can be made. */ - -static tree -fold_builtin_ceil (location_t loc, tree fndecl, tree arg) -{ - if (!validate_arg (arg, REAL_TYPE)) - return NULL_TREE; - - /* Optimize ceil of constant value. */ - if (TREE_CODE (arg) == REAL_CST && !TREE_OVERFLOW (arg)) - { - REAL_VALUE_TYPE x; - - x = TREE_REAL_CST (arg); - if (! REAL_VALUE_ISNAN (x) || ! flag_errno_math) - { - tree type = TREE_TYPE (TREE_TYPE (fndecl)); - REAL_VALUE_TYPE r; - - real_ceil (&r, TYPE_MODE (type), &x); - return build_real (type, r); - } - } - - return fold_trunc_transparent_mathfn (loc, fndecl, arg); -} - -/* Fold function call to builtin round, roundf or roundl with argument ARG. - Return NULL_TREE if no simplification can be made. */ - -static tree -fold_builtin_round (location_t loc, tree fndecl, tree arg) -{ - if (!validate_arg (arg, REAL_TYPE)) - return NULL_TREE; - - /* Optimize round of constant value. */ - if (TREE_CODE (arg) == REAL_CST && !TREE_OVERFLOW (arg)) - { - REAL_VALUE_TYPE x; - - x = TREE_REAL_CST (arg); - if (! REAL_VALUE_ISNAN (x) || ! 
flag_errno_math) - { - tree type = TREE_TYPE (TREE_TYPE (fndecl)); - REAL_VALUE_TYPE r; - - real_round (&r, TYPE_MODE (type), &x); - return build_real (type, r); - } - } - - return fold_trunc_transparent_mathfn (loc, fndecl, arg); -} - /* Fold function call to builtin lround, lroundf or lroundl (or the corresponding long long versions) and other rounding functions. ARG is the argument to the call. Return NULL_TREE if no simplification @@ -9631,20 +9399,56 @@ fold_builtin_1 (location_t loc, tree fndecl, tree arg0) return fold_builtin_nan (arg0, type, false); CASE_FLT_FN (BUILT_IN_FLOOR): - return fold_builtin_floor (loc, fndecl, arg0); + if (TREE_CODE (arg0) == REAL_CST && !TREE_OVERFLOW (arg0)) + { + REAL_VALUE_TYPE x = TREE_REAL_CST (arg0); + if (!REAL_VALUE_ISNAN (x) || !flag_errno_math) + { + tree type = TREE_TYPE (TREE_TYPE (fndecl)); + REAL_VALUE_TYPE r; + real_floor (&r, TYPE_MODE (type), &x); + return build_real (type, r); + } + } + break; CASE_FLT_FN (BUILT_IN_CEIL): - return fold_builtin_ceil (loc, fndecl, arg0); + if (TREE_CODE (arg0) == REAL_CST && !TREE_OVERFLOW (arg0)) + { + REAL_VALUE_TYPE x = TREE_REAL_CST (arg0); + if (!REAL_VALUE_ISNAN (x) || !flag_errno_math) + { + tree type = TREE_TYPE (TREE_TYPE (fndecl)); + REAL_VALUE_TYPE r; + real_ceil (&r, TYPE_MODE (type), &x); + return build_real (type, r); + } + } + break; CASE_FLT_FN (BUILT_IN_TRUNC): - return fold_builtin_trunc (loc, fndecl, arg0); + if (TREE_CODE (arg0) == REAL_CST && !TREE_OVERFLOW (arg0)) + { + REAL_VALUE_TYPE x = TREE_REAL_CST (arg0); + REAL_VALUE_TYPE r; + real_trunc (&r, TYPE_MODE (type), &x); + return build_real (type, r); + } + break; CASE_FLT_FN (BUILT_IN_ROUND): - return fold_builtin_round (loc, fndecl, arg0); - - CASE_FLT_FN (BUILT_IN_NEARBYINT): - CASE_FLT_FN (BUILT_IN_RINT): - return fold_trunc_transparent_mathfn (loc, fndecl, arg0); + if (TREE_CODE (arg0) == REAL_CST && !TREE_OVERFLOW (arg0)) + { + REAL_VALUE_TYPE x = TREE_REAL_CST (arg0); + if (!REAL_VALUE_ISNAN (x) || !flag_errno_math) + { + tree type = TREE_TYPE (TREE_TYPE (fndecl)); + REAL_VALUE_TYPE r; + real_round (&r, TYPE_MODE (type), &x); + return build_real (type, r); + } + } + break; CASE_FLT_FN (BUILT_IN_ICEIL): CASE_FLT_FN (BUILT_IN_LCEIL): diff --git a/gcc/convert.c b/gcc/convert.c index bff2978..498d3a5 100644 --- a/gcc/convert.c +++ b/gcc/convert.c @@ -225,37 +225,6 @@ convert_to_real (tree type, tree expr) break; } } - if (optimize - && (((fcode == BUILT_IN_FLOORL - || fcode == BUILT_IN_CEILL - || fcode == BUILT_IN_ROUNDL - || fcode == BUILT_IN_RINTL - || fcode == BUILT_IN_TRUNCL - || fcode == BUILT_IN_NEARBYINTL) - && (TYPE_MODE (type) == TYPE_MODE (double_type_node) - || TYPE_MODE (type) == TYPE_MODE (float_type_node))) - || ((fcode == BUILT_IN_FLOOR - || fcode == BUILT_IN_CEIL - || fcode == BUILT_IN_ROUND - || fcode == BUILT_IN_RINT - || fcode == BUILT_IN_TRUNC - || fcode == BUILT_IN_NEARBYINT) - && (TYPE_MODE (type) == TYPE_MODE (float_type_node))))) - { - tree fn = mathfn_built_in (type, fcode); - - if (fn) - { - tree arg = strip_float_extensions (CALL_EXPR_ARG (expr, 0)); - - /* Make sure (type)arg0 is an extension, otherwise we could end up - changing (float)floor(double d) into floorf((float)d), which is - incorrect because (float)d uses round-to-nearest and can round - up to the next integer. */ - if (TYPE_PRECISION (type) >= TYPE_PRECISION (TREE_TYPE (arg))) - return build_call_expr (fn, 1, fold (convert_to_real (type, arg))); - } - } /* Propagate the cast into the operation. 
*/ if (itype != type && FLOAT_TYPE_P (type)) diff --git a/gcc/fold-const.c b/gcc/fold-const.c index c4be017..6eed7b6 100644 --- a/gcc/fold-const.c +++ b/gcc/fold-const.c @@ -12896,10 +12896,6 @@ tree_binary_nonnegative_warnv_p (enum tree_code code, tree type, tree op0, bool tree_single_nonnegative_warnv_p (tree t, bool *strict_overflow_p, int depth) { - if (TREE_CODE (t) == SSA_NAME - && name_registered_for_update_p (t)) - return false; - if (TYPE_UNSIGNED (TREE_TYPE (t))) return true; @@ -12923,11 +12919,11 @@ tree_single_nonnegative_warnv_p (tree t, bool *strict_overflow_p, int depth) If this code misses important cases that unbounded recursion would not, passes that need this information could be revised to provide it through dataflow propagation. */ - if (depth < PARAM_VALUE (PARAM_MAX_SSA_NAME_QUERY_DEPTH)) - return gimple_stmt_nonnegative_warnv_p (SSA_NAME_DEF_STMT (t), - strict_overflow_p, depth); + return (!name_registered_for_update_p (t) + && depth < PARAM_VALUE (PARAM_MAX_SSA_NAME_QUERY_DEPTH) + && gimple_stmt_nonnegative_warnv_p (SSA_NAME_DEF_STMT (t), + strict_overflow_p, depth)); - /* Fallthru. */ default: return tree_simple_nonnegative_warnv_p (TREE_CODE (t), TREE_TYPE (t)); } @@ -13440,6 +13436,216 @@ tree_single_nonzero_warnv_p (tree t, bool *strict_overflow_p) return false; } +#define integer_valued_real_p(X) \ + _Pragma ("GCC error \"Use RECURSE for recursive calls\"") 0 + +#define RECURSE(X) \ + ((integer_valued_real_p) (X, depth + 1)) + +/* Return true if the floating point result of (CODE OP0) has an + integer value. We also allow +Inf, -Inf and NaN to be considered + integer values. + + DEPTH is the current nesting depth of the query. */ + +bool +integer_valued_real_unary_p (tree_code code, tree op0, int depth) +{ + switch (code) + { + case FLOAT_EXPR: + return true; + + case ABS_EXPR: + return RECURSE (op0); + + CASE_CONVERT: + { + tree type = TREE_TYPE (op0); + if (TREE_CODE (type) == INTEGER_TYPE) + return true; + if (TREE_CODE (type) == REAL_TYPE) + return RECURSE (op0); + break; + } + + default: + break; + } + return false; +} + +/* Return true if the floating point result of (CODE OP0 OP1) has an + integer value. We also allow +Inf, -Inf and NaN to be considered + integer values. + + DEPTH is the current nesting depth of the query. */ + +bool +integer_valued_real_binary_p (tree_code code, tree op0, tree op1, int depth) +{ + switch (code) + { + case PLUS_EXPR: + case MINUS_EXPR: + case MULT_EXPR: + case MIN_EXPR: + case MAX_EXPR: + return RECURSE (op0) && RECURSE (op1); + + default: + break; + } + return false; +} + +/* Return true if the floating point result of calling FNDECL with arguments + ARG0 and ARG1 has an integer value. We also allow +Inf, -Inf and NaN to be + considered integer values. If FNDECL takes fewer than 2 arguments, + the remaining ARGn are null. + + DEPTH is the current nesting depth of the query. */ + +bool +integer_valued_real_call_p (tree fndecl, tree arg0, tree arg1, int depth) +{ + if (fndecl && DECL_BUILT_IN_CLASS (fndecl) == BUILT_IN_NORMAL) + switch (DECL_FUNCTION_CODE (fndecl)) + { + CASE_FLT_FN (BUILT_IN_CEIL): + CASE_FLT_FN (BUILT_IN_FLOOR): + CASE_FLT_FN (BUILT_IN_NEARBYINT): + CASE_FLT_FN (BUILT_IN_RINT): + CASE_FLT_FN (BUILT_IN_ROUND): + CASE_FLT_FN (BUILT_IN_TRUNC): + return true; + + CASE_FLT_FN (BUILT_IN_FMIN): + CASE_FLT_FN (BUILT_IN_FMAX): + return RECURSE (arg0) && RECURSE (arg1); + + default: + break; + } + return false; +} + +/* Return true if the floating point expression T (a GIMPLE_SINGLE_RHS) + has an integer value. 
We also allow +Inf, -Inf and NaN to be + considered integer values. + + DEPTH is the current nesting depth of the query. */ + +bool +integer_valued_real_single_p (tree t, int depth) +{ + switch (TREE_CODE (t)) + { + case REAL_CST: + return real_isinteger (TREE_REAL_CST_PTR (t), TYPE_MODE (TREE_TYPE (t))); + + case COND_EXPR: + return RECURSE (TREE_OPERAND (t, 1)) && RECURSE (TREE_OPERAND (t, 2)); + + case SSA_NAME: + /* Limit the depth of recursion to avoid quadratic behavior. + This is expected to catch almost all occurrences in practice. + If this code misses important cases that unbounded recursion + would not, passes that need this information could be revised + to provide it through dataflow propagation. */ + return (!name_registered_for_update_p (t) + && depth < PARAM_VALUE (PARAM_MAX_SSA_NAME_QUERY_DEPTH) + && gimple_stmt_integer_valued_real_p (SSA_NAME_DEF_STMT (t), + depth)); + + default: + break; + } + return false; +} + +/* Return true if the floating point expression T (a GIMPLE_INVALID_RHS) + has an integer value. We also allow +Inf, -Inf and NaN to be + considered integer values. + + DEPTH is the current nesting depth of the query. */ + +static bool +integer_valued_real_invalid_p (tree t, int depth) +{ + switch (TREE_CODE (t)) + { + case COMPOUND_EXPR: + case MODIFY_EXPR: + case BIND_EXPR: + return RECURSE (TREE_OPERAND (t, 1)); + + case SAVE_EXPR: + return RECURSE (TREE_OPERAND (t, 0)); + + default: + break; + } + return false; +} + +#undef RECURSE +#undef integer_valued_real_p + +/* Return true if the floating point expression T has an integer value. + We also allow +Inf, -Inf and NaN to be considered integer values. + + DEPTH is the current nesting depth of the query. */ + +bool +integer_valued_real_p (tree t, int depth) +{ + if (t == error_mark_node) + return false; + + tree_code code = TREE_CODE (t); + switch (TREE_CODE_CLASS (code)) + { + case tcc_binary: + case tcc_comparison: + return integer_valued_real_binary_p (code, TREE_OPERAND (t, 0), + TREE_OPERAND (t, 1), depth); + + case tcc_unary: + return integer_valued_real_unary_p (code, TREE_OPERAND (t, 0), depth); + + case tcc_constant: + case tcc_declaration: + case tcc_reference: + return integer_valued_real_single_p (t, depth); + + default: + break; + } + + switch (code) + { + case COND_EXPR: + case SSA_NAME: + return integer_valued_real_single_p (t, depth); + + case CALL_EXPR: + { + tree arg0 = (call_expr_nargs (t) > 0 + ? CALL_EXPR_ARG (t, 0) + : NULL_TREE); + tree arg1 = (call_expr_nargs (t) > 1 + ? CALL_EXPR_ARG (t, 1) + : NULL_TREE); + return integer_valued_real_call_p (get_callee_fndecl (t), + arg0, arg1, depth); + } + + default: + return integer_valued_real_invalid_p (t, depth); + } +} + /* Given the components of a binary expression CODE, TYPE, OP0 and OP1, attempt to fold the expression to a constant without modifying TYPE, OP0 or OP1. 
diff --git a/gcc/fold-const.h b/gcc/fold-const.h index 1bb68e4..8e49c98 100644 --- a/gcc/fold-const.h +++ b/gcc/fold-const.h @@ -139,6 +139,12 @@ extern bool tree_single_nonnegative_warnv_p (tree, bool *, int); extern bool tree_call_nonnegative_warnv_p (tree, tree, tree, tree, bool *, int); +extern bool integer_valued_real_unary_p (tree_code, tree, int); +extern bool integer_valued_real_binary_p (tree_code, tree, tree, int); +extern bool integer_valued_real_call_p (tree, tree, tree, int); +extern bool integer_valued_real_single_p (tree, int); +extern bool integer_valued_real_p (tree, int = 0); + extern bool fold_real_zero_addition_p (const_tree, const_tree, int); extern tree combine_comparisons (location_t, enum tree_code, enum tree_code, enum tree_code, tree, tree, tree); diff --git a/gcc/gimple-fold.c b/gcc/gimple-fold.c index 85ff018..1869c09 100644 --- a/gcc/gimple-fold.c +++ b/gcc/gimple-fold.c @@ -6266,3 +6266,91 @@ gimple_stmt_nonnegative_warnv_p (gimple *stmt, bool *strict_overflow_p, return false; } } + +/* Return true if the floating-point value computed by assignment STMT + is known to have an integer value. We also allow +Inf, -Inf and NaN + to be considered integer values. + + DEPTH is the current nesting depth of the query. */ + +static bool +gimple_assign_integer_valued_real_p (gimple *stmt, int depth) +{ + enum tree_code code = gimple_assign_rhs_code (stmt); + switch (get_gimple_rhs_class (code)) + { + case GIMPLE_UNARY_RHS: + return integer_valued_real_unary_p (gimple_assign_rhs_code (stmt), + gimple_assign_rhs1 (stmt), depth); + case GIMPLE_BINARY_RHS: + return integer_valued_real_binary_p (gimple_assign_rhs_code (stmt), + gimple_assign_rhs1 (stmt), + gimple_assign_rhs2 (stmt), depth); + case GIMPLE_TERNARY_RHS: + return false; + case GIMPLE_SINGLE_RHS: + return integer_valued_real_single_p (gimple_assign_rhs1 (stmt), depth); + case GIMPLE_INVALID_RHS: + break; + } + gcc_unreachable (); +} + +/* Return true if the floating-point value computed by call STMT is known + to have an integer value. We also allow +Inf, -Inf and NaN to be + considered integer values. + + DEPTH is the current nesting depth of the query. */ + +static bool +gimple_call_integer_valued_real_p (gimple *stmt, int depth) +{ + tree arg0 = (gimple_call_num_args (stmt) > 0 + ? gimple_call_arg (stmt, 0) + : NULL_TREE); + tree arg1 = (gimple_call_num_args (stmt) > 1 + ? gimple_call_arg (stmt, 1) + : NULL_TREE); + return integer_valued_real_call_p (gimple_call_fndecl (stmt), + arg0, arg1, depth); +} + +/* Return true if the floating-point result of phi STMT is known to have + an integer value. We also allow +Inf, -Inf and NaN to be considered + integer values. + + DEPTH is the current nesting depth of the query. */ + +static bool +gimple_phi_integer_valued_real_p (gimple *stmt, int depth) +{ + for (unsigned i = 0; i < gimple_phi_num_args (stmt); ++i) + { + tree arg = gimple_phi_arg_def (stmt, i); + if (!integer_valued_real_single_p (arg, depth + 1)) + return false; + } + return true; +} + +/* Return true if the floating-point value computed by STMT is known + to have an integer value. We also allow +Inf, -Inf and NaN to be + considered integer values. + + DEPTH is the current nesting depth of the query. 
*/ + +bool +gimple_stmt_integer_valued_real_p (gimple *stmt, int depth) +{ + switch (gimple_code (stmt)) + { + case GIMPLE_ASSIGN: + return gimple_assign_integer_valued_real_p (stmt, depth); + case GIMPLE_CALL: + return gimple_call_integer_valued_real_p (stmt, depth); + case GIMPLE_PHI: + return gimple_phi_integer_valued_real_p (stmt, depth); + default: + return false; + } +} diff --git a/gcc/gimple-fold.h b/gcc/gimple-fold.h index 61edd69..9c24f77 100644 --- a/gcc/gimple-fold.h +++ b/gcc/gimple-fold.h @@ -120,6 +120,7 @@ gimple_convert_to_ptrofftype (gimple_seq *seq, tree op) } extern bool gimple_stmt_nonnegative_warnv_p (gimple *, bool *, int = 0); +extern bool gimple_stmt_integer_valued_real_p (gimple *, int = 0); /* In gimple-match.c. */ extern tree gimple_simplify (enum tree_code, tree, tree, diff --git a/gcc/match.pd b/gcc/match.pd index 98bb903..f7b7792 100644 --- a/gcc/match.pd +++ b/gcc/match.pd @@ -31,6 +31,7 @@ along with GCC; see the file COPYING3. If not see zerop CONSTANT_CLASS_P tree_expr_nonnegative_p + integer_valued_real_p integer_pow2p HONOR_NANS) @@ -71,6 +72,14 @@ along with GCC; see the file COPYING3. If not see BUILT_IN_COPYSIGN BUILT_IN_COPYSIGNL) (define_operator_list CABS BUILT_IN_CABSF BUILT_IN_CABS BUILT_IN_CABSL) +(define_operator_list TRUNC BUILT_IN_TRUNCF BUILT_IN_TRUNC BUILT_IN_TRUNCL) +(define_operator_list FLOOR BUILT_IN_FLOORF BUILT_IN_FLOOR BUILT_IN_FLOORL) +(define_operator_list CEIL BUILT_IN_CEILF BUILT_IN_CEIL BUILT_IN_CEILL) +(define_operator_list ROUND BUILT_IN_ROUNDF BUILT_IN_ROUND BUILT_IN_ROUNDL) +(define_operator_list NEARBYINT BUILT_IN_NEARBYINTF + BUILT_IN_NEARBYINT + BUILT_IN_NEARBYINTL) +(define_operator_list RINT BUILT_IN_RINTF BUILT_IN_RINT BUILT_IN_RINTL) /* Simplifications of operations with one constant operand and simplifications to constants or single values. */ @@ -2445,6 +2454,23 @@ along with GCC; see the file COPYING3. If not see (CABS (complex:c @0 real_zerop@1)) (abs @0)) +/* trunc(trunc(x)) -> trunc(x), etc. */ +(for fns (TRUNC FLOOR CEIL ROUND NEARBYINT RINT) + (simplify + (fns (fns @0)) + (fns @0))) +/* f(x) -> x if x is integer valued and f does nothing for such values. */ +(for fns (TRUNC FLOOR CEIL ROUND NEARBYINT) + (simplify + (fns integer_valued_real_p@0) + @0)) +/* Same for rint. We have to check flag_errno_math because + integer_valued_real_p accepts +Inf, -Inf and NaNs as integers. */ +(if (!flag_errno_math) + (simplify + (RINT integer_valued_real_p@0) + @0)) + /* Canonicalization of sequences of math builtins. These rules represent IL simplifications but are not necessarily optimizations. @@ -2554,6 +2580,57 @@ along with GCC; see the file COPYING3. If not see (mult (exps@1 (realpart @0)) (realpart (cexpis:type@2 (imagpart @0)))) (mult @1 (imagpart @2))))))) +(if (canonicalize_math_p ()) + /* floor(x) -> trunc(x) if x is nonnegative. */ + (for floors (FLOOR) + truncs (TRUNC) + (simplify + (floors tree_expr_nonnegative_p@0) + (truncs @0)))) + +(match double_value_p + @0 + (if (TYPE_MAIN_VARIANT (TREE_TYPE (@0)) == double_type_node))) +(for froms (BUILT_IN_TRUNCL + BUILT_IN_FLOORL + BUILT_IN_CEILL + BUILT_IN_ROUNDL + BUILT_IN_NEARBYINTL + BUILT_IN_RINTL) + tos (BUILT_IN_TRUNC + BUILT_IN_FLOOR + BUILT_IN_CEIL + BUILT_IN_ROUND + BUILT_IN_NEARBYINT + BUILT_IN_RINT) + /* truncl(extend(x)) -> extend(trunc(x)), etc., if x is a double. 
*/ + (if (optimize && canonicalize_math_p ()) + (simplify + (froms (convert double_value_p@0)) + (convert (tos @0))))) + +(match float_value_p + @0 + (if (TYPE_MAIN_VARIANT (TREE_TYPE (@0)) == float_type_node))) +(for froms (BUILT_IN_TRUNCL BUILT_IN_TRUNC + BUILT_IN_FLOORL BUILT_IN_FLOOR + BUILT_IN_CEILL BUILT_IN_CEIL + BUILT_IN_ROUNDL BUILT_IN_ROUND + BUILT_IN_NEARBYINTL BUILT_IN_NEARBYINT + BUILT_IN_RINTL BUILT_IN_RINT) + tos (BUILT_IN_TRUNCF BUILT_IN_TRUNCF + BUILT_IN_FLOORF BUILT_IN_FLOORF + BUILT_IN_CEILF BUILT_IN_CEILF + BUILT_IN_ROUNDF BUILT_IN_ROUNDF + BUILT_IN_NEARBYINTF BUILT_IN_NEARBYINTF + BUILT_IN_RINTF BUILT_IN_RINTF) + /* truncl(extend(x)) and trunc(extend(x)) -> extend(truncf(x)), etc., + if x is a float. */ + (if (optimize && canonicalize_math_p ()) + (simplify + (froms (convert float_value_p@0)) + (convert (tos @0))))) + /* cproj(x) -> x if we're ignoring infinities. */ (simplify (CPROJ @0) diff --git a/gcc/testsuite/gcc.c-torture/execute/20030125-1.c b/gcc/testsuite/gcc.c-torture/execute/20030125-1.c index 60ede34..960552c3 100644 --- a/gcc/testsuite/gcc.c-torture/execute/20030125-1.c +++ b/gcc/testsuite/gcc.c-torture/execute/20030125-1.c @@ -1,5 +1,6 @@ /* Verify whether math functions are simplified. */ /* { dg-require-effective-target c99_runtime } */ +/* { dg-require-weak } */ double sin(double); double floor(double); float @@ -29,25 +30,25 @@ main() #endif return 0; } -__attribute__ ((noinline)) +__attribute__ ((weak)) double floor(double a) { abort (); } -__attribute__ ((noinline)) +__attribute__ ((weak)) float floorf(float a) { return a; } -__attribute__ ((noinline)) +__attribute__ ((weak)) double sin(double a) { return a; } -__attribute__ ((noinline)) +__attribute__ ((weak)) float sinf(float a) { diff --git a/gcc/testsuite/gcc.dg/builtins-57.c b/gcc/testsuite/gcc.dg/builtins-57.c index 361826c..18d40e8 100644 --- a/gcc/testsuite/gcc.dg/builtins-57.c +++ b/gcc/testsuite/gcc.dg/builtins-57.c @@ -1,5 +1,5 @@ /* { dg-do link } */ -/* { dg-options "-std=c99 -ffinite-math-only" } */ +/* { dg-options "-std=c99 -ffinite-math-only -O" } */ #include "builtins-config.h" diff --git a/gcc/testsuite/gcc.dg/torture/builtin-integral-1.c b/gcc/testsuite/gcc.dg/torture/builtin-integral-1.c index 522646d..f3c3338 100644 --- a/gcc/testsuite/gcc.dg/torture/builtin-integral-1.c +++ b/gcc/testsuite/gcc.dg/torture/builtin-integral-1.c @@ -10,6 +10,7 @@ that various math functions are marked const/pure and can be folded. */ /* { dg-options "-ffinite-math-only -fno-math-errno" } */ +/* { dg-skip-if "" { *-*-* } { "-O0" } { "" } } */ extern int link_failure (int);
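The fold-const.c/gimple-fold.c half of the patch is built around a depth-limited walk over definitions, capped by PARAM_MAX_SSA_NAME_QUERY_DEPTH so the query cannot become quadratic. The following standalone sketch is plain C, not GCC internals; all names and the depth limit of 8 are invented for illustration. It models the same pattern: the walk answers conservatively ("not known integer valued") once the recursion gets too deep.

```c
/* Toy model of a depth-limited "is this real value integral?" query,
   in the spirit of integer_valued_real_single_p.  Not GCC code.  */

#include <stdbool.h>
#include <stddef.h>

#define MAX_QUERY_DEPTH 8   /* stand-in for the --param value */

enum toy_code { TOY_FLOAT_OF_INT, TOY_PLUS, TOY_ROUNDING_CALL, TOY_UNKNOWN };

struct toy_expr
{
  enum toy_code code;
  struct toy_expr *op0, *op1;   /* operands, may be NULL */
};

/* Return true if EXPR is known to have an integer value, giving up
   (conservatively returning false) once DEPTH reaches the limit.  */
static bool
toy_integer_valued_p (const struct toy_expr *expr, int depth)
{
  if (expr == NULL || depth >= MAX_QUERY_DEPTH)
    return false;

  switch (expr->code)
    {
    case TOY_FLOAT_OF_INT:    /* (double) i: conversion from an integer */
    case TOY_ROUNDING_CALL:   /* trunc/floor/ceil/round/nearbyint/rint */
      return true;
    case TOY_PLUS:            /* x + y is integral if both operands are */
      return toy_integer_valued_p (expr->op0, depth + 1)
             && toy_integer_valued_p (expr->op1, depth + 1);
    default:
      return false;
    }
}

int
main (void)
{
  struct toy_expr i2d = { TOY_FLOAT_OF_INT, NULL, NULL };
  struct toy_expr tr  = { TOY_ROUNDING_CALL, NULL, NULL };
  struct toy_expr sum = { TOY_PLUS, &i2d, &tr };
  /* Exit status 0: the sum of two integer-valued reals is integer valued.  */
  return toy_integer_valued_p (&sum, 0) ? 0 : 1;
}
```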