
[tree-optimization]: [3 of 3]: Boolify compares & more

Message ID: CAEwic4YDkH+pLLZvmVBNv9s2-5n6s2cztSsXpj10hjO7G0AFQg@mail.gmail.com
State: New

Commit Message

Kai Tietz July 8, 2011, 4:17 p.m. UTC
Hello,

This is the reworked patch.  It fixes VRP to handle bitwise operations
on one-bit precision types and to handle some type hoisting cases.
Some cases can't be handled as long as VRP doesn't allow inserting new
statements during the folding pass.  To get a better match in the first
pass, VRP now steps through each BB first -> last for stmt folding.
For this I extended the substitute_and_fold function by a new argument
which indicates whether the scan within a BB shall be done from first
to last, or from last to first.  I removed from this new patch the part
that re-ran the stmt-fold pass, as that is no longer necessary once the
folding direction within a BB is changed.

This change of scanning direction, plus the type-cast handling, makes
it possible to remove the dom dump from the testcase tree-ssa/vrp47.c,
as all cases are now handled within VRP itself.
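
To illustrate the kind of input this affects, here is a reduced example
in the spirit of tree-ssa/vrp47.c (a sketch, not the exact testcase):
once the compares are boolified, the truth operation becomes a one-bit
precision bitwise operation that VRP can now simplify on its own:

/* Sketch in the spirit of gcc.dg/tree-ssa/vrp47.c (not the exact
   testcase): x and y are known to be in [0, 1], so the truth AND is
   lowered to a one-bit precision BIT_AND_EXPR, which vrp1 can now
   simplify to "x & y" without relying on dom1's copy propagation.  */
int
h (int x, int y)
{
  if ((x >= 0 && x <= 1) && (y >= 0 && y <= 1))
    return x && y;  /* expected to become x & y in the vrp1 dump  */
  return -1;
}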

Bootstrapped and regression-tested for all standard languages (plus
Ada and Obj-C++) on host x86_64-pc-linux-gnu.

OK to apply?

Regards,
Kai

ChangeLog gcc/

2011-07-08  Kai Tietz  <ktietz@redhat.com>

	* tree-ssa-ccp.c (ccp_finalize): Add new
	argument for substitute_and_fold.
	* tree-ssa-copy.c (fini_copy_prop): Likewise.
	* tree-ssa-propagate.h (substitute_and_fold):
	Likewise.
	* tree-ssa-propagate.c (substitute_and_fold):
	Likewise.
	* tree-vrp.c (vrp_finalize): Likewise.
	(extract_range_from_binary_expr): Add handling
	for BIT_IOR_EXPR, BIT_AND_EXPR, and BIT_NOT_EXPR.
	(register_edge_assert_for_1): Add handling for 1-bit
	BIT_IOR_EXPR and BIT_NOT_EXPR.
	(register_edge_assert_for): Add handling for 1-bit
	BIT_IOR_EXPR.
	(ssa_name_get_inner_ssa_name_p): New helper function.
	(ssa_name_get_cast_to_p): New helper function.
	(simplify_truth_ops_using_ranges): Handle prefixed
	cast instruction for result, and add support for one
	bit precision BIT_IOR_EXPR, BIT_AND_EXPR, BIT_XOR_EXPR,
	and BIT_NOT_EXPR.
	(simplify_stmt_using_ranges): Add handling for one bit
	precision BIT_IOR_EXPR, BIT_AND_EXPR, BIT_XOR_EXPR,
	and BIT_NOT_EXPR.

ChangeLog gcc/testsuite

2011-07-08  Kai Tietz  <ktietz@redhat.com>

	* gcc.dg/tree-ssa/vrp47.c: Remove dom dump and adjust
	testcase to scan vrp output instead.

Index: gcc/gcc/testsuite/gcc.dg/tree-ssa/vrp47.c
===================================================================
--- gcc.orig/gcc/testsuite/gcc.dg/tree-ssa/vrp47.c	2011-01-11 20:36:16.000000000 +0100
+++ gcc/gcc/testsuite/gcc.dg/tree-ssa/vrp47.c	2011-07-08 17:49:55.016847200 +0200
@@ -4,7 +4,7 @@
    jumps when evaluating an && condition.  VRP is not able to optimize
    this.  */
 /* { dg-do compile { target { ! "mips*-*-* s390*-*-*  avr-*-* mn10300-*-*" } } } */
-/* { dg-options "-O2 -fdump-tree-vrp -fdump-tree-dom" } */
+/* { dg-options "-O2 -fdump-tree-vrp" } */
 /* { dg-options "-O2 -fdump-tree-vrp -fdump-tree-dom -march=i586" {
target { i?86-*-* && ilp32 } } } */

 int h(int x, int y)
@@ -36,13 +36,10 @@ int f(int x)
    0 or 1.  */
 /* { dg-final { scan-tree-dump-times "\[xy\]\[^ \]* !=" 0 "vrp1" } } */

-/* This one needs more copy propagation that only happens in dom1.  */
-/* { dg-final { scan-tree-dump-times "x\[^ \]* & y" 1 "dom1" } } */
-/* { dg-final { scan-tree-dump-times "x\[^ \]* & y" 1 "vrp1" { xfail
*-*-* } } } */
+/* { dg-final { scan-tree-dump-times "x\[^ \]* & y" 1 "vrp1" } } */

 /* These two are fully simplified by VRP.  */
 /* { dg-final { scan-tree-dump-times "x\[^ \]* \[|\] y" 1 "vrp1" } } */
 /* { dg-final { scan-tree-dump-times "x\[^ \]* \\^ 1" 1 "vrp1" } } */

 /* { dg-final { cleanup-tree-dump "vrp\[0-9\]" } } */
-/* { dg-final { cleanup-tree-dump "dom\[0-9\]" } } */

Comments

Michael Matz July 8, 2011, 4:29 p.m. UTC | #1
Hi,

On Fri, 8 Jul 2011, Kai Tietz wrote:

> This is the reworked patch.  It fixes VRP to handle bitwise operations
> on one-bit precision types and to handle some type hoisting cases.
> Some cases can't be handled as long as VRP doesn't allow inserting new
> statements during the folding pass.  To get a better match in the first
> pass, VRP now steps through each BB first -> last for stmt folding.
> For this I extended the substitute_and_fold function by a new argument
> which indicates whether the scan within a BB shall be done from first
> to last, or from last to first.  I removed from this new patch the part
> that re-ran the stmt-fold pass, as that is no longer necessary once the
> folding direction within a BB is changed.

You still add BIT_IOR_EXPR for POINTER_TYPE_P, which seems strange.  All
these tests for TYPE_PRECISION being 1 (and then handling
BIT_IOR/AND_EXPR like TRUTH_IOR/AND_EXPR) aren't necessary if you extend
the general handling of BIT_IOR_EXPR (for instance) to deal not only
with constant 1 but to handle all-ones constants specially.  That is,
replace integer_onep with integer_all_onesp at certain places.

Because also for wider-than-1-bit precision we can infer useful ranges
out of "VARYING | all-ones".

Certainly the special-casing on 1 bit is ugly.  Work towards making
tree-vrp leaner and handling cases more generally instead of piling
special case on special case.
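
Spelled out, the suggestion would look roughly like the following
untested sketch for the IOR case in extract_range_from_binary_expr
(assuming the vr0/vr1 locals and helpers already used in that function;
exact placement left open):

/* Untested sketch of the suggested generalization: an all-ones
   operand absorbs BIT_IOR_EXPR at any precision (x | -1 == -1), so
   the 1-bit special case falls out of the general handling.  */
else if ((code == TRUTH_OR_EXPR || code == BIT_IOR_EXPR)
	 && ((vr0.type == VR_RANGE
	      && integer_all_onesp (vr0.min)
	      && integer_all_onesp (vr0.max))
	     || (vr1.type == VR_RANGE
		 && integer_all_onesp (vr1.min)
		 && integer_all_onesp (vr1.max))))
  {
    /* The whole expression evaluates to all-ones.  */
    set_value_range_to_value (vr, build_int_cst (TREE_TYPE (op0), -1),
			      NULL);
    return;
  }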


Ciao,
Michael.
Kai Tietz July 8, 2011, 4:45 p.m. UTC | #2
2011/7/8 Michael Matz <matz@suse.de>:
> Hi,
>
> On Fri, 8 Jul 2011, Kai Tietz wrote:
>
>> This is the reworked patch.  It fixes VRP to handle bitwise operations
>> on one-bit precision types and to handle some type hoisting cases.
>> Some cases can't be handled as long as VRP doesn't allow inserting new
>> statements during the folding pass.  To get a better match in the first
>> pass, VRP now steps through each BB first -> last for stmt folding.
>> For this I extended the substitute_and_fold function by a new argument
>> which indicates whether the scan within a BB shall be done from first
>> to last, or from last to first.  I removed from this new patch the part
>> that re-ran the stmt-fold pass, as that is no longer necessary once the
>> folding direction within a BB is changed.
>
> You still add BIT_IOR_EXPR for POINTER_TYPE_P, which seems strange.
Yes, I am aware of that.  I kept the old behavior for BIT_IOR_EXPR here,
as otherwise it would run into the gcc_unreachable case; here we want to
say varying ... Well, even that is not necessarily true, as a bitwise
binary operation whose arguments have different widths might still have
a smaller range than the type itself.
E.g. x[0..255] | y[0..1024] has a limited maximum in its result (both
operands fit in 11 bits, so the result can never exceed 2047).

As we handle value ranges here and not bit masks for VR inspection,
there are some limitations, too.
E.g. (x[mask:0xf0] | y[mask:0x7]) & 8 is certainly zero.
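
A self-contained illustration of that limitation (hypothetical example,
plain C):

/* Hypothetical example: bit masks prove the result is always 0, but
   value ranges alone cannot.  x contributes only bits 4-7 and y only
   bits 0-2, so bit 3 of (x | y) is never set; yet range analysis only
   knows (x | y) lies in [0, 247], which still includes values with
   bit 3 set.  */
unsigned
f (unsigned x, unsigned y)
{
  x &= 0xf0;
  y &= 0x07;
  return (x | y) & 8;  /* always 0, but invisible to range-only VRP  */
}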

>  All
> these test for TYPE_PRECISION being 1 (and then handling BIT_IOR/AND_EXPR
> like TRUTH_IOR/AND_EXPR) aren't necessary if you extend the general
> handling for BIT_IOR_EXPR (for instance) to deal with not only constant
> 1, but simply handling all-ones constants specially.  That is replace
> integer_onep with integer_all_onesp at certain places.
Well, in some cases this is true, but checking the precision has the
advantage that signed cases are covered and we don't need to compare
range min/max.  Nevertheless, some assumptions about combinations hold
only for one-bit precision types.

> Because also for wider than 1-bit precision it's the case that we can
> infer usefull ranges out of "VARYING | all-ones".
Yes, this might have advantages for some inspections.

Kai

Patch

Index: gcc/gcc/tree-ssa-ccp.c
===================================================================
--- gcc.orig/gcc/tree-ssa-ccp.c	2011-06-30 11:30:12.000000000 +0200
+++ gcc/gcc/tree-ssa-ccp.c	2011-07-08 17:20:22.378750800 +0200
@@ -880,7 +880,8 @@  ccp_finalize (void)

   /* Perform substitutions based on the known constant values.  */
   something_changed = substitute_and_fold (get_constant_value,
-					   ccp_fold_stmt, true);
+					   ccp_fold_stmt, true,
+					   true);

   free (const_val);
   const_val = NULL;
Index: gcc/gcc/tree-ssa-copy.c
===================================================================
--- gcc.orig/gcc/tree-ssa-copy.c	2011-06-17 11:52:51.000000000 +0200
+++ gcc/gcc/tree-ssa-copy.c	2011-07-08 17:19:32.464412500 +0200
@@ -778,7 +778,7 @@  fini_copy_prop (void)

   /* Don't do DCE if we have loops.  That's the simplest way to not
      destroy the scev cache.  */
-  substitute_and_fold (get_value, NULL, !current_loops);
+  substitute_and_fold (get_value, NULL, !current_loops, true);

   free (copy_of);
 }
Index: gcc/gcc/tree-ssa-propagate.c
===================================================================
--- gcc.orig/gcc/tree-ssa-propagate.c	2011-05-12 21:01:07.000000000 +0200
+++ gcc/gcc/tree-ssa-propagate.c	2011-07-08 17:18:26.921089500 +0200
@@ -979,12 +979,15 @@  replace_phi_args_in (gimple phi, ssa_pro

    DO_DCE is true if trivially dead stmts can be removed.

+   SCAN_BB_REVERSE is true if the statements within a BB shall be
+   scanned from last to first.  Otherwise we scan from first to last.
+
    Return TRUE when something changed.  */

 bool
 substitute_and_fold (ssa_prop_get_value_fn get_value_fn,
 		     ssa_prop_fold_stmt_fn fold_fn,
-		     bool do_dce)
+		     bool do_dce, bool scan_bb_reverse)
 {
   basic_block bb;
   bool something_changed = false;
@@ -1059,9 +1062,10 @@  substitute_and_fold (ssa_prop_get_value_
 	for (i = gsi_start_phis (bb); !gsi_end_p (i); gsi_next (&i))
 	  replace_phi_args_in (gsi_stmt (i), get_value_fn);

-      /* Propagate known values into stmts.  Do a backward walk to expose
-	 more trivially deletable stmts.  */
-      for (i = gsi_last_bb (bb); !gsi_end_p (i);)
+      /* Propagate known values into stmts.  Do a backward walk if
+         SCAN_BB_REVERSE is true; in some cases that exposes more
+	 trivially deletable stmts.  */
+      for (i = (scan_bb_reverse ? gsi_last_bb (bb) : gsi_start_bb (bb)); !gsi_end_p (i);)
 	{
           bool did_replace;
 	  gimple stmt = gsi_stmt (i);
@@ -1070,7 +1074,10 @@  substitute_and_fold (ssa_prop_get_value_
 	  gimple_stmt_iterator oldi;

 	  oldi = i;
-	  gsi_prev (&i);
+	  if (scan_bb_reverse)
+	    gsi_prev (&i);
+	  else
+	    gsi_next (&i);

 	  /* Ignore ASSERT_EXPRs.  They are used by VRP to generate
 	     range information for names and they are discarded
Index: gcc/gcc/tree-ssa-propagate.h
===================================================================
--- gcc.orig/gcc/tree-ssa-propagate.h	2010-09-09 16:09:48.000000000 +0200
+++ gcc/gcc/tree-ssa-propagate.h	2011-07-08 17:12:42.847397700 +0200
@@ -74,6 +74,7 @@  bool valid_gimple_rhs_p (tree);
 void move_ssa_defining_stmt_for_defs (gimple, gimple);
 bool update_call_from_tree (gimple_stmt_iterator *, tree);
 bool stmt_makes_single_store (gimple);
-bool substitute_and_fold (ssa_prop_get_value_fn, ssa_prop_fold_stmt_fn, bool);
+bool substitute_and_fold (ssa_prop_get_value_fn, ssa_prop_fold_stmt_fn, bool,
+			  bool);

 #endif /* _TREE_SSA_PROPAGATE_H  */
Index: gcc/gcc/tree-vrp.c
===================================================================
--- gcc.orig/gcc/tree-vrp.c	2011-07-08 14:07:42.000000000 +0200
+++ gcc/gcc/tree-vrp.c	2011-07-08 17:44:20.981930200 +0200
@@ -2232,6 +2232,7 @@  extract_range_from_binary_expr (value_ra
      some cases.  */
   if (code != BIT_AND_EXPR
       && code != TRUTH_AND_EXPR
+      && code != BIT_IOR_EXPR
       && code != TRUTH_OR_EXPR
       && code != TRUNC_DIV_EXPR
       && code != FLOOR_DIV_EXPR
@@ -2291,6 +2292,8 @@  extract_range_from_binary_expr (value_ra
 	  else
 	    set_value_range_to_varying (vr);
 	}
+      else if (code == BIT_IOR_EXPR)
+        set_value_range_to_varying (vr);
       else
 	gcc_unreachable ();

@@ -2300,11 +2303,13 @@  extract_range_from_binary_expr (value_ra
   /* For integer ranges, apply the operation to each end of the
      range and see what we end up with.  */
   if (code == TRUTH_AND_EXPR
-      || code == TRUTH_OR_EXPR)
+      || code == TRUTH_OR_EXPR
+      || ((code == BIT_AND_EXPR || code == BIT_IOR_EXPR)
+          && TYPE_PRECISION (TREE_TYPE (op1)) == 1))
     {
       /* If one of the operands is zero, we know that the whole
 	 expression evaluates zero.  */
-      if (code == TRUTH_AND_EXPR
+      if ((code == TRUTH_AND_EXPR || code == BIT_AND_EXPR)
 	  && ((vr0.type == VR_RANGE
 	       && integer_zerop (vr0.min)
 	       && integer_zerop (vr0.max))
@@ -2317,7 +2322,7 @@  extract_range_from_binary_expr (value_ra
 	}
       /* If one of the operands is one, we know that the whole
 	 expression evaluates one.  */
-      else if (code == TRUTH_OR_EXPR
+      else if ((code == TRUTH_OR_EXPR || code == BIT_IOR_EXPR)
 	       && ((vr0.type == VR_RANGE
 		    && integer_onep (vr0.min)
 		    && integer_onep (vr0.max))
@@ -2809,7 +2814,7 @@  extract_range_from_unary_expr (value_ran
      cannot easily determine a resulting range.  */
   if (code == FIX_TRUNC_EXPR
       || code == FLOAT_EXPR
-      || code == BIT_NOT_EXPR
+      || (code == BIT_NOT_EXPR && TYPE_PRECISION (type) != 1)
       || code == CONJ_EXPR)
     {
       /* We can still do constant propagation here.  */
@@ -3976,7 +3981,9 @@  build_assert_expr_for (tree cond, tree v
       tree a = build2 (ASSERT_EXPR, TREE_TYPE (v), v, cond);
       assertion = gimple_build_assign (n, a);
     }
-  else if (TREE_CODE (cond) == TRUTH_NOT_EXPR)
+  else if (TREE_CODE (cond) == TRUTH_NOT_EXPR
+  	   || (TREE_CODE (cond) == BIT_NOT_EXPR
+  	       && TYPE_PRECISION (TREE_TYPE (cond)) == 1))
     {
       /* Given !V, build the assignment N = false.  */
       tree op0 = TREE_OPERAND (cond, 0);
@@ -4531,7 +4538,9 @@  register_edge_assert_for_1 (tree op, enu
       retval |= register_edge_assert_for_1 (gimple_assign_rhs2 (op_def),
 					    code, e, bsi);
     }
-  else if (gimple_assign_rhs_code (op_def) == TRUTH_NOT_EXPR)
+  else if (gimple_assign_rhs_code (op_def) == TRUTH_NOT_EXPR
+  	   || (gimple_assign_rhs_code (op_def) == BIT_NOT_EXPR
+  	       && TYPE_PRECISION (TREE_TYPE (op)) == 1))
     {
       /* Recurse, flipping CODE.  */
       code = invert_tree_comparison (code, false);
@@ -4617,6 +4626,9 @@  register_edge_assert_for (tree name, edg

       if (is_gimple_assign (def_stmt)
 	  && (gimple_assign_rhs_code (def_stmt) == TRUTH_OR_EXPR
+	      || (gimple_assign_rhs_code (def_stmt) == BIT_IOR_EXPR
+		  && INTEGRAL_TYPE_P (TREE_TYPE (name))
+		  && TYPE_PRECISION (TREE_TYPE (name)) == 1)
 	      /* For BIT_IOR_EXPR only if NAME == 0 both operands have
 		 necessarily zero value.  */
 	      || (comp_code == EQ_EXPR
@@ -6747,19 +6759,96 @@  varying:
   return SSA_PROP_VARYING;
 }

+/* If OP is an SSA_NAME defined by an assignment whose rhs is again a
+   plain SSA_NAME, return that rhs.  Otherwise return NULL_TREE.  */
+static tree
+ssa_name_get_inner_ssa_name_p (tree op)
+{
+  gimple stmt;
+
+  if (TREE_CODE (op) != SSA_NAME
+      || !is_gimple_assign (SSA_NAME_DEF_STMT (op)))
+    return NULL_TREE;
+  stmt = SSA_NAME_DEF_STMT (op);
+  if (gimple_assign_rhs_code (stmt) != SSA_NAME)
+    return NULL_TREE;
+  return gimple_assign_rhs1 (stmt);
+}
+
+/* Return the operand of the defining statement of OP if OP is
+   defined by a type conversion.  Otherwise return NULL_TREE.  */
+static tree
+ssa_name_get_cast_to_p (tree op)
+{
+  gimple stmt;
+
+  if (TREE_CODE (op) != SSA_NAME
+      || !is_gimple_assign (SSA_NAME_DEF_STMT (op)))
+    return NULL_TREE;
+  stmt = SSA_NAME_DEF_STMT (op);
+  if (!CONVERT_EXPR_CODE_P (gimple_assign_rhs_code (stmt)))
+    return NULL_TREE;
+  return gimple_assign_rhs1 (stmt);
+}
+
 /* Simplify boolean operations if the source is known
    to be already a boolean.  */
 static bool
 simplify_truth_ops_using_ranges (gimple_stmt_iterator *gsi, gimple stmt)
 {
   enum tree_code rhs_code = gimple_assign_rhs_code (stmt);
+  gimple stmt2 = stmt;
   tree val = NULL;
-  tree op0, op1;
+  tree op0, op1, cop0, cop1;
   value_range_t *vr;
   bool sop = false;
   bool need_conversion;
+  location_t loc = gimple_location (stmt);

   op0 = gimple_assign_rhs1 (stmt);
+  op1 = NULL_TREE;
+
+  /* Handle cases with prefixed type-cast.  */
+  if (CONVERT_EXPR_CODE_P (rhs_code)
+      && INTEGRAL_TYPE_P (TREE_TYPE (op0))
+      && TREE_CODE (op0) == SSA_NAME
+      && is_gimple_assign (SSA_NAME_DEF_STMT (op0))
+      && INTEGRAL_TYPE_P (TREE_TYPE (gimple_assign_lhs (stmt))))
+    {
+      stmt2 = SSA_NAME_DEF_STMT (op0);
+      op0 = gimple_assign_rhs1 (stmt2);
+      if (!INTEGRAL_TYPE_P (TREE_TYPE (op0)))
+	return false;
+      rhs_code = gimple_assign_rhs_code (stmt2);
+      if (rhs_code != BIT_NOT_EXPR && rhs_code != TRUTH_NOT_EXPR
+	  && rhs_code != TRUTH_AND_EXPR && rhs_code != BIT_AND_EXPR
+	  && rhs_code != TRUTH_OR_EXPR && rhs_code != BIT_IOR_EXPR
+	  && rhs_code != TRUTH_XOR_EXPR && rhs_code != BIT_XOR_EXPR
+	  && rhs_code != NE_EXPR && rhs_code != EQ_EXPR)
+	return false;
+      if (rhs_code == BIT_AND_EXPR || rhs_code == BIT_IOR_EXPR
+	  || rhs_code == BIT_XOR_EXPR || rhs_code == TRUTH_AND_EXPR
+	  || rhs_code == TRUTH_OR_EXPR || rhs_code == TRUTH_XOR_EXPR
+	  || rhs_code == NE_EXPR || rhs_code == EQ_EXPR)
+	op1 = gimple_assign_rhs2 (stmt2);
+      if (gimple_has_location (stmt2))
+        loc = gimple_location (stmt2);
+    }
+  else if (CONVERT_EXPR_CODE_P (rhs_code))
+    return false;
+  else if (rhs_code == BIT_AND_EXPR || rhs_code == BIT_IOR_EXPR
+      || rhs_code == BIT_XOR_EXPR || rhs_code == TRUTH_AND_EXPR
+      || rhs_code == TRUTH_OR_EXPR || rhs_code == TRUTH_XOR_EXPR
+      || rhs_code == NE_EXPR || rhs_code == EQ_EXPR)
+    op1 = gimple_assign_rhs2 (stmt);
+
+  /* ~X is only equivalent to !X if the type precision is one and X
+     has an integral type.  */
+  if (rhs_code == BIT_NOT_EXPR
+      && (!INTEGRAL_TYPE_P (TREE_TYPE (op0))
+	  || TYPE_PRECISION (TREE_TYPE (op0)) != 1))
+    return false;
+
   if (TYPE_PRECISION (TREE_TYPE (op0)) != 1)
     {
       if (TREE_CODE (op0) != SSA_NAME)
@@ -6775,22 +6864,100 @@  simplify_truth_ops_using_ranges (gimple_
         return false;
     }

-  if (rhs_code == TRUTH_NOT_EXPR)
+  if (op1 && TREE_CODE (op1) != INTEGER_CST
+      && TYPE_PRECISION (TREE_TYPE (op1)) != 1)
+    {
+      vr = get_value_range (op1);
+      val = compare_range_with_value (GE_EXPR, vr, integer_zero_node, &sop);
+      if (!val || !integer_onep (val))
+	return false;
+
+      val = compare_range_with_value (LE_EXPR, vr, integer_one_node, &sop);
+      if (!val || !integer_onep (val))
+	return false;
+    }
+
+  need_conversion =
+    !useless_type_conversion_p (TREE_TYPE (gimple_assign_lhs (stmt)),
+			        TREE_TYPE (op0));
+
+  /* Comparisons X != 0 get folded by a prior pass to (bool) X, but
+     X == 0 might not be folded for non-boolean X to (bool) (X ^ 1).
+     So for bitwise binary operations we have three cases to handle:
+     a) ((bool) X) op ((bool) Y)
+     b) ((bool) X) op (Y == 0) OR (X == 0) op ((bool) Y)
+     c) (X == 0) op (Y == 0)
+     The latter two cases can't be handled for now, as the vr tables
+     would need to be adjusted.  */
+  if (need_conversion
+      && (rhs_code == BIT_XOR_EXPR
+	  || rhs_code == BIT_AND_EXPR
+	  || rhs_code == BIT_IOR_EXPR)
+      && TREE_CODE (op1) == SSA_NAME && TREE_CODE (op0) == SSA_NAME)
+    {
+      cop0 = ssa_name_get_cast_to_p (op0);
+      cop1 = ssa_name_get_cast_to_p (op1);
+      if (!cop0 || !cop1)
+        /* We would need a new statement for cases b and c, and we
+           can't adjust the vr table, so bail out.  */
+        return false;
+
+      if (!INTEGRAL_TYPE_P (TREE_TYPE (cop0))
+	  || !types_compatible_p (TREE_TYPE (cop0), TREE_TYPE (cop1)))
+	return false;
+      need_conversion =
+	!useless_type_conversion_p (TREE_TYPE (gimple_assign_lhs (stmt)),
+				    TREE_TYPE (cop0));
+      if (need_conversion)
+	return false;
+      op0 = cop0;
+      op1 = cop1;
+
+      /* We need to re-check the value ranges of the new operands
+         for 1-bit precision/range.  */
+      if (TYPE_PRECISION (TREE_TYPE (op0)) != 1)
+	{
+	  if (TREE_CODE (op0) != SSA_NAME)
+	    return false;
+	  vr = get_value_range (op0);
+
+	  val = compare_range_with_value (GE_EXPR, vr, integer_zero_node, &sop);
+	  if (!val || !integer_onep (val))
+	    return false;
+
+	  val = compare_range_with_value (LE_EXPR, vr, integer_one_node, &sop);
+	  if (!val || !integer_onep (val))
+	    return false;
+	}
+
+      if (op1 && TYPE_PRECISION (TREE_TYPE (op1)) != 1)
+	{
+	  vr = get_value_range (op1);
+	  val = compare_range_with_value (GE_EXPR, vr, integer_zero_node, &sop);
+	  if (!val || !integer_onep (val))
+	    return false;
+
+	  val = compare_range_with_value (LE_EXPR, vr, integer_one_node, &sop);
+	  if (!val || !integer_onep (val))
+	    return false;
+	}
+    }
+  else if (rhs_code == TRUTH_NOT_EXPR || rhs_code == BIT_NOT_EXPR)
     {
       rhs_code = NE_EXPR;
       op1 = build_int_cst (TREE_TYPE (op0), 1);
     }
   else
     {
-      op1 = gimple_assign_rhs2 (stmt);
-
       /* Reduce number of cases to handle.  */
       if (is_gimple_min_invariant (op1))
 	{
           /* Exclude anything that should have been already folded.  */
 	  if (rhs_code != EQ_EXPR
 	      && rhs_code != NE_EXPR
-	      && rhs_code != TRUTH_XOR_EXPR)
+	      && rhs_code != TRUTH_XOR_EXPR
+	      && rhs_code != BIT_XOR_EXPR)
 	    return false;

 	  if (!integer_zerop (op1)
@@ -6810,18 +6977,6 @@  simplify_truth_ops_using_ranges (gimple_
 	  /* Punt on A == B as there is no BIT_XNOR_EXPR.  */
 	  if (rhs_code == EQ_EXPR)
 	    return false;
-
-	  if (TYPE_PRECISION (TREE_TYPE (op1)) != 1)
-	    {
-	      vr = get_value_range (op1);
-	      val = compare_range_with_value (GE_EXPR, vr, integer_zero_node, &sop);
-	      if (!val || !integer_onep (val))
-	        return false;
-
-	      val = compare_range_with_value (LE_EXPR, vr, integer_one_node, &sop);
-	      if (!val || !integer_onep (val))
-	        return false;
-	    }
 	}
     }

@@ -6838,7 +6993,8 @@  simplify_truth_ops_using_ranges (gimple_
         warning_at (location, OPT_Wstrict_overflow,
 	            _("assuming signed overflow does not occur when "
 		      "simplifying && or || to & or |"));
-      else
+      else if (rhs_code != BIT_AND_EXPR && rhs_code != BIT_IOR_EXPR
+	       && rhs_code != BIT_XOR_EXPR)
         warning_at (location, OPT_Wstrict_overflow,
 	            _("assuming signed overflow does not occur when "
 		      "simplifying ==, != or ! to identity or ^"));
@@ -6859,16 +7015,21 @@  simplify_truth_ops_using_ranges (gimple_
     case TRUTH_AND_EXPR:
       rhs_code = BIT_AND_EXPR;
       break;
+    case BIT_AND_EXPR:
+      break;
     case TRUTH_OR_EXPR:
       rhs_code = BIT_IOR_EXPR;
+    case BIT_IOR_EXPR:
       break;
     case TRUTH_XOR_EXPR:
+    case BIT_XOR_EXPR:
     case NE_EXPR:
       if (integer_zerop (op1))
 	{
 	  gimple_assign_set_rhs_with_ops (gsi,
 					  need_conversion ? NOP_EXPR : SSA_NAME,
 					  op0, NULL);
+	  gimple_set_location (stmt, loc);
 	  update_stmt (gsi_stmt (*gsi));
 	  return true;
 	}
@@ -6879,10 +7040,20 @@  simplify_truth_ops_using_ranges (gimple_
       gcc_unreachable ();
     }

+  /* We can't insert new expressions here, as otherwise the tracked
+     vr tables would get out of bounds.  */
   if (need_conversion)
     return false;

+  /* Reduce chains of SSA_NAME -> SSA_NAME copies.  */
+  while ((cop0 = ssa_name_get_inner_ssa_name_p (op0)) != NULL_TREE)
+    op0 = cop0;
+
+  while ((cop1 = ssa_name_get_inner_ssa_name_p (op1)) != NULL_TREE)
+    op1 = cop1;
+
   gimple_assign_set_rhs_with_ops (gsi, rhs_code, op0, op1);
+  gimple_set_location (stmt, loc);
   update_stmt (gsi_stmt (*gsi));
   return true;
 }
@@ -7390,6 +7561,7 @@  simplify_stmt_using_ranges (gimple_stmt_
 	{
 	case EQ_EXPR:
 	case NE_EXPR:
+	case BIT_NOT_EXPR:
 	case TRUTH_NOT_EXPR:
 	case TRUTH_AND_EXPR:
 	case TRUTH_OR_EXPR:
@@ -7425,13 +7597,21 @@  simplify_stmt_using_ranges (gimple_stmt_
 	     if all the bits being cleared are already cleared or
 	     all the bits being set are already set.  */
 	  if (INTEGRAL_TYPE_P (TREE_TYPE (rhs1)))
-	    return simplify_bit_ops_using_ranges (gsi, stmt);
+	    {
+	      if (simplify_truth_ops_using_ranges (gsi, stmt))
+		return true;
+	      return simplify_bit_ops_using_ranges (gsi, stmt);
+	    }
 	  break;

 	CASE_CONVERT:
 	  if (TREE_CODE (rhs1) == SSA_NAME
 	      && INTEGRAL_TYPE_P (TREE_TYPE (rhs1)))
-	    return simplify_conversion_using_ranges (stmt);
+	    {
+	      if (simplify_truth_ops_using_ranges (gsi, stmt))
+		return true;
+	      return simplify_conversion_using_ranges (stmt);
+	    }
 	  break;

 	default:
@@ -7686,7 +7866,7 @@  vrp_finalize (void)
     }

   substitute_and_fold (op_with_constant_singleton_value_range,
-		       vrp_fold_stmt, false);
+		       vrp_fold_stmt, false, false);

   if (warn_array_bounds)
     check_all_array_refs ();