From patchwork Sun Sep 8 16:19:37 2024
X-Patchwork-Submitter: Andrew Pinski
X-Patchwork-Id: 1982221
From: Andrew Pinski <quic_apinski@quicinc.com>
To: gcc-patches@gcc.gnu.org
CC: Andrew Pinski <quic_apinski@quicinc.com>
Subject: [PATCH] phiopt: Small refactoring/cleanup of non-ssa name case of factor_out_conditional_operation
Date: Sun, 8 Sep 2024 09:19:37 -0700
Message-ID: <20240908161937.1186692-1-quic_apinski@quicinc.com>
X-Mailer: git-send-email 2.34.1

This small cleanup removes a redundant check for gimple_assign_cast_p
and reindents based on that.  It also inverts the if statement that
checks whether the type is integral and whether the constant fits into
the new type, so that the function returns NULL early, and reindents
based on that.  It also moves the has_single_use checks much earlier:
they are less complex and still cheaper than some of the other checks
(like the ones on the INTEGER_CST side).

This was noticed while adding a few new things to
factor_out_conditional_operation, but those are not ready to submit yet.

Note there is no functional difference with this change.  For
reference, a small example of the kind of PHI this function factors is
included after the ChangeLog.

Bootstrapped and tested on x86_64-linux-gnu.

gcc/ChangeLog:

	* tree-ssa-phiopt.cc (factor_out_conditional_operation): Move
	the has_single_use checks much earlier.  Remove redundant check
	for gimple_assign_cast_p.  Invert the check that the integral
	constant fits into the new type.
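For context only (this sketch is not part of the patch or of the GCC
testsuite, and the function name "f" is made up), the kind of PHI this
function factors looks roughly like the following at the source level.
Both arms perform the same conversion, so the PHI has two arguments
defined by the same operation, and phiopt can move that operation below
the PHI, provided each argument has a single use (the checks this patch
reorders):

/* Illustrative only.  At the GIMPLE level this is roughly

     arg0 = (int) a;   arg1 = (int) b;   t = PHI <arg0, arg1>

   which factor_out_conditional_operation can rewrite into

     tmp = PHI <a, b>;   t = (int) tmp;  */
int
f (short a, short b, int c)
{
  int t;
  if (c)
    t = a;	/* implicit (int) a */
  else
    t = b;	/* implicit (int) b */
  return t;
}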
Signed-off-by: Andrew Pinski <quic_apinski@quicinc.com>
---
 gcc/tree-ssa-phiopt.cc | 122 ++++++++++++++++++++---------------------
 1 file changed, 60 insertions(+), 62 deletions(-)

diff --git a/gcc/tree-ssa-phiopt.cc b/gcc/tree-ssa-phiopt.cc
index 271a5d51f09..06ec5875722 100644
--- a/gcc/tree-ssa-phiopt.cc
+++ b/gcc/tree-ssa-phiopt.cc
@@ -265,6 +265,11 @@ factor_out_conditional_operation (edge e0, edge e1, gphi *phi,
   tree new_arg0 = arg0_op.ops[0];
   tree new_arg1;
 
+  /* If arg0 have > 1 use, then this transformation actually increases
+     the number of expressions evaluated at runtime.  */
+  if (!has_single_use (arg0))
+    return NULL;
+
   if (TREE_CODE (arg1) == SSA_NAME)
     {
       /* Check if arg1 is an SSA_NAME.  */
@@ -278,6 +283,11 @@ factor_out_conditional_operation (edge e0, edge e1, gphi *phi,
       if (arg1_op.operands_occurs_in_abnormal_phi ())
 	return NULL;
 
+      /* If arg1 have > 1 use, then this transformation actually increases
+	 the number of expressions evaluated at runtime.  */
+      if (!has_single_use (arg1))
+	return NULL;
+
       /* Either arg1_def_stmt or arg0_def_stmt should be conditional.  */
       if (dominated_by_p (CDI_DOMINATORS, gimple_bb (phi), gimple_bb (arg0_def_stmt))
 	  && dominated_by_p (CDI_DOMINATORS,
@@ -295,80 +305,68 @@ factor_out_conditional_operation (edge e0, edge e1, gphi *phi,
       if (dominated_by_p (CDI_DOMINATORS,
 			  gimple_bb (phi), gimple_bb (arg0_def_stmt)))
 	return NULL;
-      /* If arg1 is an INTEGER_CST, fold it to new type.  */
-      if (INTEGRAL_TYPE_P (TREE_TYPE (new_arg0))
-	  && (int_fits_type_p (arg1, TREE_TYPE (new_arg0))
-	      || (TYPE_PRECISION (TREE_TYPE (new_arg0))
+      /* Only handle if arg1 is a INTEGER_CST and one that fits
+	 into the new type or if it is the same precision.  */
+      if (!INTEGRAL_TYPE_P (TREE_TYPE (new_arg0))
+	  || !(int_fits_type_p (arg1, TREE_TYPE (new_arg0))
+	       || (TYPE_PRECISION (TREE_TYPE (new_arg0))
 		  == TYPE_PRECISION (TREE_TYPE (arg1)))))
+	return NULL;
+
+      /* For the INTEGER_CST case, we are just moving the
+	 conversion from one place to another, which can often
+	 hurt as the conversion moves further away from the
+	 statement that computes the value.  So, perform this
+	 only if new_arg0 is an operand of COND_STMT, or
+	 if arg0_def_stmt is the only non-debug stmt in
+	 its basic block, because then it is possible this
+	 could enable further optimizations (minmax replacement
+	 etc.).  See PR71016.
+	 Note no-op conversions don't have this issue as
+	 it will not generate any zero/sign extend in that case.  */
+      if ((TYPE_PRECISION (TREE_TYPE (new_arg0))
+	   != TYPE_PRECISION (TREE_TYPE (arg1)))
+	  && new_arg0 != gimple_cond_lhs (cond_stmt)
+	  && new_arg0 != gimple_cond_rhs (cond_stmt)
+	  && gimple_bb (arg0_def_stmt) == e0->src)
 	{
-	  if (gimple_assign_cast_p (arg0_def_stmt))
+	  gsi = gsi_for_stmt (arg0_def_stmt);
+	  gsi_prev_nondebug (&gsi);
+	  if (!gsi_end_p (gsi))
 	    {
-	      /* For the INTEGER_CST case, we are just moving the
-		 conversion from one place to another, which can often
-		 hurt as the conversion moves further away from the
-		 statement that computes the value.  So, perform this
-		 only if new_arg0 is an operand of COND_STMT, or
-		 if arg0_def_stmt is the only non-debug stmt in
-		 its basic block, because then it is possible this
-		 could enable further optimizations (minmax replacement
-		 etc.).  See PR71016.
-		 Note no-op conversions don't have this issue as
-		 it will not generate any zero/sign extend in that case.  */
-	      if ((TYPE_PRECISION (TREE_TYPE (new_arg0))
-		   != TYPE_PRECISION (TREE_TYPE (arg1)))
-		  && new_arg0 != gimple_cond_lhs (cond_stmt)
-		  && new_arg0 != gimple_cond_rhs (cond_stmt)
-		  && gimple_bb (arg0_def_stmt) == e0->src)
+	      gimple *stmt = gsi_stmt (gsi);
+	      /* Ignore nops, predicates and labels. */
+	      if (gimple_code (stmt) == GIMPLE_NOP
+		  || gimple_code (stmt) == GIMPLE_PREDICT
+		  || gimple_code (stmt) == GIMPLE_LABEL)
+		;
+	      else if (gassign *assign = dyn_cast <gassign *> (stmt))
 		{
-		  gsi = gsi_for_stmt (arg0_def_stmt);
+		  tree lhs = gimple_assign_lhs (assign);
+		  enum tree_code ass_code
+		    = gimple_assign_rhs_code (assign);
+		  if (ass_code != MAX_EXPR && ass_code != MIN_EXPR)
+		    return NULL;
+		  if (lhs != gimple_assign_rhs1 (arg0_def_stmt))
+		    return NULL;
 		  gsi_prev_nondebug (&gsi);
 		  if (!gsi_end_p (gsi))
-		    {
-		      gimple *stmt = gsi_stmt (gsi);
-		      /* Ignore nops, predicates and labels. */
-		      if (gimple_code (stmt) == GIMPLE_NOP
-			  || gimple_code (stmt) == GIMPLE_PREDICT
-			  || gimple_code (stmt) == GIMPLE_LABEL)
-			;
-		      else if (gassign *assign = dyn_cast <gassign *> (stmt))
-			{
-			  tree lhs = gimple_assign_lhs (assign);
-			  enum tree_code ass_code
-			    = gimple_assign_rhs_code (assign);
-			  if (ass_code != MAX_EXPR && ass_code != MIN_EXPR)
-			    return NULL;
-			  if (lhs != gimple_assign_rhs1 (arg0_def_stmt))
-			    return NULL;
-			  gsi_prev_nondebug (&gsi);
-			  if (!gsi_end_p (gsi))
-			    return NULL;
-			}
-		      else
 			return NULL;
-		    }
-		  gsi = gsi_for_stmt (arg0_def_stmt);
-		  gsi_next_nondebug (&gsi);
-		  if (!gsi_end_p (gsi))
-		    return NULL;
 		}
-	      new_arg1 = fold_convert (TREE_TYPE (new_arg0), arg1);
-
-	      /* Drop the overlow that fold_convert might add. */
-	      if (TREE_OVERFLOW (new_arg1))
-		new_arg1 = drop_tree_overflow (new_arg1);
+	      else
+		return NULL;
 	    }
-	  else
+	  gsi = gsi_for_stmt (arg0_def_stmt);
+	  gsi_next_nondebug (&gsi);
+	  if (!gsi_end_p (gsi))
 	    return NULL;
 	}
-      else
-	return NULL;
-    }
+      new_arg1 = fold_convert (TREE_TYPE (new_arg0), arg1);
 
-  /* If arg0/arg1 have > 1 use, then this transformation actually increases
-     the number of expressions evaluated at runtime.  */
-  if (!has_single_use (arg0)
-      || (arg1_def_stmt && !has_single_use (arg1)))
-    return NULL;
+      /* Drop the overlow that fold_convert might add. */
+      if (TREE_OVERFLOW (new_arg1))
+	new_arg1 = drop_tree_overflow (new_arg1);
+    }
 
   /* If types of new_arg0 and new_arg1 are different bailout.  */
   if (!types_compatible_p (TREE_TYPE (new_arg0), TREE_TYPE (new_arg1)))