From patchwork Fri Nov 24 16:53:04 2023
X-Patchwork-Submitter: Andrew MacLeod
X-Patchwork-Id: 1868245
Message-ID: <96de63b4-649f-46e5-9e28-47fe9a65b948@redhat.com>
Date: Fri, 24 Nov 2023 11:53:04 -0500
To: gcc-patches
From: Andrew MacLeod
Subject: [PATCH] PR tree-optimization/111922 - Ensure wi_fold arguments match precisions.
Cc: Jakub Jelinek

The problem here is that IPA is calling something like operator_minus with two operands of different precisions: one of precision 32 (int) and one of precision 64 (pointer).  There are various ways this can happen, as described in the PR.

Regardless of whether IPA should be promoting types before calling into range-ops, range-ops does not support mismatched precisions in its arguments, and it does not have the context to know what should be promoted or changed.  It is expected that the caller will ensure the operands are compatible.  However, it is not really practical for the caller to know this without more context: some operations legitimately take operands of different precisions or even types, e.g. shifts, casts, etc.  It seems silly to require IPA to carry a big switch over the tree code just to match up, promote, or bail out when operands don't match.

Range-ops routines shouldn't crash when this happens either, so this patch takes the conservative approach and returns VARYING if there is a mismatch in the precision of the arguments.

This fixes the problem and bootstraps on x86_64-pc-linux-gnu with no new regressions.

OK for trunk?

Andrew

PS  If you would rather we trap in these cases and fix the callers, then I'd suggest we change these checks to gcc_checking_asserts instead.
I have also prepared a version that does a gcc_checking_assert instead of returning VARYING, and done a bootstrap/testrun.  Of course, the callers would then have to be changed.  It bootstraps fine in that variation too, and all the testcases (except this one, of course) pass.  It's clearly not a common occurrence, and my inclination is to apply this patch so we silently move on and simply don't provide useful range info; that is all the callers in these cases are likely to do anyway.

From f9cddb4cf931826f09197ed0fc2d6d64e6ccc3c3 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod
Date: Wed, 22 Nov 2023 17:24:42 -0500
Subject: [PATCH] Ensure wi_fold arguments match precisions.

Return VARYING if any of the required operands or types to wi_fold do
not match expected precisions.

	PR tree-optimization/111922
	gcc/
	* range-op.cc (operator_plus::wi_fold): Check that precisions of
	arguments and result type match.
	(operator_widen_plus_signed::wi_fold): Ditto.
	(operator_widen_plus_unsigned::wi_fold): Ditto.
	(operator_minus::wi_fold): Ditto.
	(operator_min::wi_fold): Ditto.
	(operator_max::wi_fold): Ditto.
	(operator_mult::wi_fold): Ditto.
	(operator_widen_mult_signed::wi_fold): Ditto.
	(operator_widen_mult_unsigned::wi_fold): Ditto.
	(operator_div::wi_fold): Ditto.
	(operator_lshift::wi_fold): Ditto.
	(operator_rshift::wi_fold): Ditto.
	(operator_bitwise_and::wi_fold): Ditto.
	(operator_bitwise_or::wi_fold): Ditto.
	(operator_bitwise_xor::wi_fold): Ditto.
	(operator_trunc_mod::wi_fold): Ditto.
	(operator_abs::wi_fold): Ditto.
	(operator_absu::wi_fold): Ditto.

	gcc/testsuite/
	* gcc.dg/pr111922.c: New.
---
 gcc/range-op.cc                 | 119 ++++++++++++++++++++++++++++++++
 gcc/testsuite/gcc.dg/pr111922.c |  29 ++++++++
 2 files changed, 148 insertions(+)
 create mode 100644 gcc/testsuite/gcc.dg/pr111922.c

diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index 6137f2aeed3..ddb7339c075 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -1651,6 +1651,13 @@ operator_plus::wi_fold (irange &r, tree type,
 			const wide_int &lh_lb, const wide_int &lh_ub,
 			const wide_int &rh_lb, const wide_int &rh_ub) const
 {
+  // This operation requires all types to be the same precision.
+  if (lh_lb.get_precision () != rh_lb.get_precision ()
+      || lh_lb.get_precision () != TYPE_PRECISION (type))
+    {
+      r.set_varying (type);
+      return;
+    }
   wi::overflow_type ov_lb, ov_ub;
   signop s = TYPE_SIGN (type);
   wide_int new_lb = wi::add (lh_lb, rh_lb, s, &ov_lb);
@@ -1797,6 +1804,12 @@ operator_widen_plus_signed::wi_fold (irange &r, tree type,
 				     const wide_int &rh_lb,
 				     const wide_int &rh_ub) const
 {
+  // This operation requires both sides to be the same precision.
+  if (lh_lb.get_precision () != rh_lb.get_precision ())
+    {
+      r.set_varying (type);
+      return;
+    }
   wi::overflow_type ov_lb, ov_ub;
   signop s = TYPE_SIGN (type);
 
@@ -1830,6 +1843,12 @@ operator_widen_plus_unsigned::wi_fold (irange &r, tree type,
 				       const wide_int &rh_lb,
 				       const wide_int &rh_ub) const
 {
+  // This operation requires both sides to be the same precision.
+  if (lh_lb.get_precision () != rh_lb.get_precision ())
+    {
+      r.set_varying (type);
+      return;
+    }
   wi::overflow_type ov_lb, ov_ub;
   signop s = TYPE_SIGN (type);
 
@@ -1858,6 +1877,14 @@ operator_minus::wi_fold (irange &r, tree type,
 			 const wide_int &lh_lb, const wide_int &lh_ub,
 			 const wide_int &rh_lb, const wide_int &rh_ub) const
 {
+  // This operation requires all ranges and types to be the same precision.
+  if (lh_lb.get_precision () != rh_lb.get_precision ()
+      || lh_lb.get_precision () != TYPE_PRECISION (type))
+    {
+      r.set_varying (type);
+      return;
+    }
+
   wi::overflow_type ov_lb, ov_ub;
   signop s = TYPE_SIGN (type);
   wide_int new_lb = wi::sub (lh_lb, rh_ub, s, &ov_lb);
@@ -2008,6 +2035,13 @@ operator_min::wi_fold (irange &r, tree type,
 		       const wide_int &lh_lb, const wide_int &lh_ub,
 		       const wide_int &rh_lb, const wide_int &rh_ub) const
 {
+  // This operation requires all ranges and types to be the same precision.
+  if (lh_lb.get_precision () != rh_lb.get_precision ()
+      || lh_lb.get_precision () != TYPE_PRECISION (type))
+    {
+      r.set_varying (type);
+      return;
+    }
   signop s = TYPE_SIGN (type);
   wide_int new_lb = wi::min (lh_lb, rh_lb, s);
   wide_int new_ub = wi::min (lh_ub, rh_ub, s);
@@ -2027,6 +2061,13 @@ operator_max::wi_fold (irange &r, tree type,
 		       const wide_int &lh_lb, const wide_int &lh_ub,
 		       const wide_int &rh_lb, const wide_int &rh_ub) const
 {
+  // This operation requires all ranges and types to be the same precision.
+  if (lh_lb.get_precision () != rh_lb.get_precision ()
+      || lh_lb.get_precision () != TYPE_PRECISION (type))
+    {
+      r.set_varying (type);
+      return;
+    }
   signop s = TYPE_SIGN (type);
   wide_int new_lb = wi::max (lh_lb, rh_lb, s);
   wide_int new_ub = wi::max (lh_ub, rh_ub, s);
@@ -2149,6 +2190,13 @@ operator_mult::wi_fold (irange &r, tree type,
 			const wide_int &lh_lb, const wide_int &lh_ub,
 			const wide_int &rh_lb, const wide_int &rh_ub) const
 {
+  // This operation requires all ranges and types to be the same precision.
+  if (lh_lb.get_precision () != rh_lb.get_precision ()
+      || lh_lb.get_precision () != TYPE_PRECISION (type))
+    {
+      r.set_varying (type);
+      return;
+    }
   if (TYPE_OVERFLOW_UNDEFINED (type))
     {
       wi_cross_product (r, type, lh_lb, lh_ub, rh_lb, rh_ub);
@@ -2253,6 +2301,12 @@ operator_widen_mult_signed::wi_fold (irange &r, tree type,
 				     const wide_int &rh_lb,
 				     const wide_int &rh_ub) const
 {
+  // This operation requires both sides to be the same precision.
+  if (lh_lb.get_precision () != rh_lb.get_precision ())
+    {
+      r.set_varying (type);
+      return;
+    }
   signop s = TYPE_SIGN (type);
   wide_int lh_wlb
     = wide_int::from (lh_lb, wi::get_precision (lh_lb) * 2, SIGNED);
@@ -2285,6 +2339,12 @@ operator_widen_mult_unsigned::wi_fold (irange &r, tree type,
 				       const wide_int &rh_lb,
 				       const wide_int &rh_ub) const
 {
+  // This operation requires both sides to be the same precision.
+  if (lh_lb.get_precision () != rh_lb.get_precision ())
+    {
+      r.set_varying (type);
+      return;
+    }
   signop s = TYPE_SIGN (type);
   wide_int lh_wlb
     = wide_int::from (lh_lb, wi::get_precision (lh_lb) * 2, UNSIGNED);
@@ -2364,6 +2424,13 @@ operator_div::wi_fold (irange &r, tree type,
 		       const wide_int &lh_lb, const wide_int &lh_ub,
 		       const wide_int &rh_lb, const wide_int &rh_ub) const
 {
+  // This operation requires all ranges and types to be the same precision.
+  if (lh_lb.get_precision () != rh_lb.get_precision ()
+      || lh_lb.get_precision () != TYPE_PRECISION (type))
+    {
+      r.set_varying (type);
+      return;
+    }
   const wide_int dividend_min = lh_lb;
   const wide_int dividend_max = lh_ub;
   const wide_int divisor_min = rh_lb;
@@ -2555,6 +2622,12 @@ operator_lshift::wi_fold (irange &r, tree type,
 			  const wide_int &lh_lb, const wide_int &lh_ub,
 			  const wide_int &rh_lb, const wide_int &rh_ub) const
 {
+  // This operation requires left operand and type to be the same precision.
+  if (lh_lb.get_precision () != TYPE_PRECISION (type))
+    {
+      r.set_varying (type);
+      return;
+    }
   signop sign = TYPE_SIGN (type);
   unsigned prec = TYPE_PRECISION (type);
   int overflow_pos = sign == SIGNED ? prec - 1 : prec;
@@ -2807,6 +2880,12 @@ operator_rshift::wi_fold (irange &r, tree type,
 			  const wide_int &lh_lb, const wide_int &lh_ub,
 			  const wide_int &rh_lb, const wide_int &rh_ub) const
 {
+  // This operation requires left operand and type to be the same precision.
+  if (lh_lb.get_precision () != TYPE_PRECISION (type))
+    {
+      r.set_varying (type);
+      return;
+    }
   wi_cross_product (r, type, lh_lb, lh_ub, rh_lb, rh_ub);
 }
 
@@ -3319,6 +3398,13 @@ operator_bitwise_and::wi_fold (irange &r, tree type,
 			       const wide_int &rh_lb,
 			       const wide_int &rh_ub) const
 {
+  // This operation requires all ranges and types to be the same precision.
+  if (lh_lb.get_precision () != rh_lb.get_precision ()
+      || lh_lb.get_precision () != TYPE_PRECISION (type))
+    {
+      r.set_varying (type);
+      return;
+    }
   if (wi_optimize_and_or (r, BIT_AND_EXPR, type, lh_lb, lh_ub, rh_lb, rh_ub))
     return;
 
@@ -3629,6 +3715,13 @@ operator_bitwise_or::wi_fold (irange &r, tree type,
 			      const wide_int &rh_lb,
 			      const wide_int &rh_ub) const
 {
+  // This operation requires all ranges and types to be the same precision.
+  if (lh_lb.get_precision () != rh_lb.get_precision ()
+      || lh_lb.get_precision () != TYPE_PRECISION (type))
+    {
+      r.set_varying (type);
+      return;
+    }
   if (wi_optimize_and_or (r, BIT_IOR_EXPR, type, lh_lb, lh_ub, rh_lb, rh_ub))
     return;
 
@@ -3722,6 +3815,13 @@ operator_bitwise_xor::wi_fold (irange &r, tree type,
 			       const wide_int &rh_lb,
 			       const wide_int &rh_ub) const
 {
+  // This operation requires all ranges and types to be the same precision.
+  if (lh_lb.get_precision () != rh_lb.get_precision ()
+      || lh_lb.get_precision () != TYPE_PRECISION (type))
+    {
+      r.set_varying (type);
+      return;
+    }
   signop sign = TYPE_SIGN (type);
   wide_int maybe_nonzero_lh, mustbe_nonzero_lh;
   wide_int maybe_nonzero_rh, mustbe_nonzero_rh;
@@ -3865,6 +3965,13 @@ operator_trunc_mod::wi_fold (irange &r, tree type,
 			     const wide_int &rh_lb,
 			     const wide_int &rh_ub) const
 {
+  // This operation requires all ranges and types to be the same precision.
+  if (lh_lb.get_precision () != rh_lb.get_precision ()
+      || lh_lb.get_precision () != TYPE_PRECISION (type))
+    {
+      r.set_varying (type);
+      return;
+    }
   wide_int new_lb, new_ub, tmp;
   signop sign = TYPE_SIGN (type);
   unsigned prec = TYPE_PRECISION (type);
@@ -4151,6 +4258,12 @@ operator_abs::wi_fold (irange &r, tree type,
 			const wide_int &rh_lb ATTRIBUTE_UNUSED,
 			const wide_int &rh_ub ATTRIBUTE_UNUSED) const
 {
+  // This operation requires the operand and type to be the same precision.
+  if (lh_lb.get_precision () != TYPE_PRECISION (type))
+    {
+      r.set_varying (type);
+      return;
+    }
   wide_int min, max;
   signop sign = TYPE_SIGN (type);
   unsigned prec = TYPE_PRECISION (type);
@@ -4274,6 +4387,12 @@ operator_absu::wi_fold (irange &r, tree type,
 			 const wide_int &rh_lb ATTRIBUTE_UNUSED,
 			 const wide_int &rh_ub ATTRIBUTE_UNUSED) const
 {
+  // This operation requires the operand and type to be the same precision.
+  if (lh_lb.get_precision () != TYPE_PRECISION (type))
+    {
+      r.set_varying (type);
+      return;
+    }
   wide_int new_lb, new_ub;
 
   // Pass through VR0 the easy cases.
diff --git a/gcc/testsuite/gcc.dg/pr111922.c b/gcc/testsuite/gcc.dg/pr111922.c
new file mode 100644
index 00000000000..4f429d741c7
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/pr111922.c
@@ -0,0 +1,29 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -fno-tree-fre" } */
+
+void f2 (void);
+void f4 (int, int, int);
+struct A { int a; };
+struct B { struct A *b; int c; } v;
+
+static int
+f1 (x, y)
+     struct C *x;
+     struct A *y;
+{
+  (v.c = v.b->a) || (v.c = v.b->a);
+  f2 ();
+}
+
+static void
+f3 (int x, int y)
+{
+  int b = f1 (0, ~x);
+  f4 (0, 0, v.c);
+}
+
+void
+f5 (void)
+{
+  f3 (0, 0);
+}
-- 
2.41.0