From patchwork Thu Nov 19 22:44:57 2020
X-Patchwork-Submitter: Andrew MacLeod
X-Patchwork-Id: 1403345
To: gcc-patches
Subject: [PATCH] Process only valid shift ranges.
From: Andrew MacLeod
Date: Thu, 19 Nov 2020 17:44:57 -0500

When shifting outside the valid range of [0, precision-1], we can
choose to process just the valid shift values, since the rest invoke
undefined behavior.  This allows us to produce results for
x << [0,2][+INF, +INF] by discarding the invalid ranges and processing
just [0,2].  This is particularly important when the shift value is
limited by a branch, as demonstrated in the testcases.

As Jakub suggested in the PR, we can mask the shift value with the full
range of valid shift values and use the result of that.  If that result
is undefined, we fall back to our old undefined behaviour.

Bootstrapped on x86_64-pc-linux-gnu, no regressions.  Pushed.

Andrew

commit d0d8b5d83614d8f0d0e40c0520d4f40ffa01f8d9
Author: Andrew MacLeod
Date:   Thu Nov 19 17:41:30 2020 -0500

    Process only valid shift ranges.

    When shifting outside the valid range of [0, precision-1], we can
    choose to process just the valid shift values, since the rest is
    undefined.  This allows us to produce results for
    x << [0,2][+INF, +INF] by discarding the invalid ranges and
    processing just [0,2].

    gcc/
            PR tree-optimization/93781
            * range-op.cc (get_shift_range): Rename from
            undefined_shift_range_check and now return valid shift ranges.
            (operator_lshift::fold_range): Use result from get_shift_range.
            (operator_rshift::fold_range): Ditto.

    gcc/testsuite/
            * gcc.dg/tree-ssa/pr93781-1.c: New.
            * gcc.dg/tree-ssa/pr93781-2.c: New.
            * gcc.dg/tree-ssa/pr93781-3.c: New.
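To make this concrete, here is a small standalone C illustration of the
kind of code the patch handles (the function name is made up for this
example; it is not part of the patch):

/* Illustration only.  On the taken path the shift amount s is limited
   to [0, 2], so 1u << s can only be 1, 2, or 4, and range analysis may
   conclude the result lies in [1, 4] instead of giving up because s
   could exceed the precision on some other path.  */
unsigned
lshift_demo (unsigned s)
{
  if (s < 3)
    return 1u << s;   /* s in [0, 2] => result in [1, 4].  */
  return 0;
}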
diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index 6be60073d19..5bf37e1ad82 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -80,30 +80,25 @@ empty_range_varying (irange &r, tree type,
     return false;
 }
 
-// Return TRUE if shifting by OP is undefined behavior, and set R to
-// the appropriate range.
+// Return false if shifting by OP is undefined behavior.  Otherwise, return
+// true and the range it is to be shifted by.  This allows trimming out
+// the undefined ranges, leaving only valid ranges if there are any.
 
 static inline bool
-undefined_shift_range_check (irange &r, tree type, const irange &op)
+get_shift_range (irange &r, tree type, const irange &op)
 {
   if (op.undefined_p ())
-    {
-      r.set_undefined ();
-      return true;
-    }
+    return false;
 
-  // Shifting by any values outside [0..prec-1], gets undefined
-  // behavior from the shift operation.  We cannot even trust
-  // SHIFT_COUNT_TRUNCATED at this stage, because that applies to rtl
-  // shifts, and the operation at the tree level may be widened.
-  if (wi::lt_p (op.lower_bound (), 0, TYPE_SIGN (op.type ()))
-      || wi::ge_p (op.upper_bound (),
-                   TYPE_PRECISION (type), TYPE_SIGN (op.type ())))
-    {
-      r.set_varying (type);
-      return true;
-    }
-  return false;
+  // Build valid range and intersect it with the shift range.
+  r = value_range (build_int_cst_type (op.type (), 0),
+                   build_int_cst_type (op.type (), TYPE_PRECISION (type) - 1));
+  r.intersect (op);
+
+  // If there are no valid ranges in the shift range, return false.
+  if (r.undefined_p ())
+    return false;
+  return true;
 }
 
 // Return TRUE if 0 is within [WMIN, WMAX].
@@ -1465,13 +1460,20 @@ operator_lshift::fold_range (irange &r, tree type,
                              const irange &op1,
                              const irange &op2) const
 {
-  if (undefined_shift_range_check (r, type, op2))
-    return true;
+  int_range_max shift_range;
+  if (!get_shift_range (shift_range, type, op2))
+    {
+      if (op2.undefined_p ())
+        r.set_undefined ();
+      else
+        r.set_varying (type);
+      return true;
+    }
 
   // Transform left shifts by constants into multiplies.
-  if (op2.singleton_p ())
+  if (shift_range.singleton_p ())
     {
-      unsigned shift = op2.lower_bound ().to_uhwi ();
+      unsigned shift = shift_range.lower_bound ().to_uhwi ();
       wide_int tmp = wi::set_bit_in_zero (shift, TYPE_PRECISION (type));
       int_range<1> mult (type, tmp, tmp);
@@ -1487,7 +1489,7 @@ operator_lshift::fold_range (irange &r, tree type,
     }
   else
     // Otherwise, invoke the generic fold routine.
-    return range_operator::fold_range (r, type, op1, op2);
+    return range_operator::fold_range (r, type, op1, shift_range);
 }
 
 void
@@ -1709,11 +1711,17 @@ operator_rshift::fold_range (irange &r, tree type,
                              const irange &op1,
                              const irange &op2) const
 {
-  // Invoke the generic fold routine if not undefined..
-  if (undefined_shift_range_check (r, type, op2))
-    return true;
+  int_range_max shift;
+  if (!get_shift_range (shift, type, op2))
+    {
+      if (op2.undefined_p ())
+        r.set_undefined ();
+      else
+        r.set_varying (type);
+      return true;
+    }
 
-  return range_operator::fold_range (r, type, op1, op2);
+  return range_operator::fold_range (r, type, op1, shift);
 }
 
 void
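For readers less familiar with GCC's irange API, the following
standalone sketch mirrors the idea behind get_shift_range using a plain
single interval; the struct and function names here are illustrative,
not GCC API.  Note that a real irange is a union of sub-ranges, which
is how [0,2][+INF, +INF] intersected with [0, prec-1] yields exactly
[0,2]; this sketch handles only one interval.

#include <stdbool.h>

struct interval { long lo, hi; };   /* inclusive bounds */

/* Clamp the requested shift range OP to the valid range
   [0, PRECISION-1].  Return false if nothing valid remains, in which
   case a caller would fall back to undefined/varying.  */
static bool
get_shift_range_sketch (struct interval *r, struct interval op,
                        int precision)
{
  r->lo = op.lo > 0 ? op.lo : 0;
  r->hi = op.hi < precision - 1 ? op.hi : precision - 1;
  return r->lo <= r->hi;
}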
diff --git a/gcc/testsuite/gcc.dg/tree-ssa/pr93781-1.c b/gcc/testsuite/gcc.dg/tree-ssa/pr93781-1.c
new file mode 100644
index 00000000000..5ebd8053965
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/tree-ssa/pr93781-1.c
@@ -0,0 +1,18 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-tree-evrp" } */
+
+void kill (void);
+
+void foo (unsigned int arg)
+{
+  int a = arg - 3;
+  unsigned int b = 4;
+  int x = 0x1 << arg;
+
+  if (a < 0)
+    b = x;
+
+  /* In the fullness of time, we will delete this call.  */
+  if (b >= 5)
+    kill ();
+}
diff --git a/gcc/testsuite/gcc.dg/tree-ssa/pr93781-2.c b/gcc/testsuite/gcc.dg/tree-ssa/pr93781-2.c
new file mode 100644
index 00000000000..c9b28783c12
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/tree-ssa/pr93781-2.c
@@ -0,0 +1,17 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-tree-evrp" } */
+
+void kill (void);
+
+void foo (unsigned int arg)
+{
+  unsigned int C000003FE = 4;
+
+  if (arg + 1 < 4)  // also works for if (arg < 3)
+    C000003FE = 0x1 << arg;
+
+  if (C000003FE >= 5)
+    kill ();
+}
+
+/* { dg-final { scan-tree-dump-not "kill" "evrp" } } */
diff --git a/gcc/testsuite/gcc.dg/tree-ssa/pr93781-3.c b/gcc/testsuite/gcc.dg/tree-ssa/pr93781-3.c
new file mode 100644
index 00000000000..e1d2be0ea7f
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/tree-ssa/pr93781-3.c
@@ -0,0 +1,21 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-tree-evrp" } */
+
+void kill (void);
+
+void foo (unsigned int arg)
+{
+  int a = arg - 3;
+  unsigned int b = 4;
+
+  if (a < 0)
+    {
+      int x = 0x1 << arg;
+      b = x;
+    }
+
+  if (b >= 5)
+    kill ();
+}
+
+/* { dg-final { scan-tree-dump-not "kill" "evrp" } } */