From patchwork Fri Sep  6 23:31:04 2024
X-Patchwork-Submitter: Andrew Pinski
X-Patchwork-Id: 1982104
From: Andrew Pinski <quic_apinski@quicinc.com>
To: gcc-patches@gcc.gnu.org
Cc: Andrew Pinski <quic_apinski@quicinc.com>
Subject: [PATCH] gimple-fold: Move optimizing memcpy to memset to fold_stmt from fab
Date: Fri, 6 Sep 2024 16:31:04 -0700
Message-ID: <20240906233104.257204-1-quic_apinski@quicinc.com>

I noticed this folding inside fab could be done elsewhere and could even
improve inlining decisions and a few other things, so let's move it to
fold_stmt.  It also fixes PR 116601, because places which call fold_stmt
already have to deal with the stmt becoming a non-throwing statement.

On the branches, the fix for PR 116601 should be the original patch
rather than a backport of this one.

Bootstrapped and tested on x86_64-linux-gnu.

	PR tree-optimization/116601

gcc/ChangeLog:

	* gimple-fold.cc (optimize_memcpy_to_memset): Move from
	tree-ssa-ccp.cc and rename.  Also return true if the
	optimization happened.
	(gimple_fold_builtin_memory_op): Call optimize_memcpy_to_memset.
	(fold_stmt_1): Call optimize_memcpy_to_memset for load/store
	copies.
	* tree-ssa-ccp.cc (optimize_memcpy): Delete.
	(pass_fold_builtins::execute): Remove code that calls
	optimize_memcpy.

gcc/testsuite/ChangeLog:

	* gcc.dg/pr78408-1.c: Adjust dump scan to match where the
	optimization now happens.
	* g++.dg/torture/except-2.C: New test.
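As a concrete illustration (this example is not part of the patch; the
struct mirrors the pr78408-1.c testcase touched below), the pattern that
optimize_memcpy_to_memset handles looks like this at the source level:

    struct S { char a[33]; };
    struct S a, b;

    void
    f (void)
    {
      a = (struct S) {0};	/* or memset (&a, 0, sizeof (a));  */
      b = a;			/* with a's clear still visible, this copy
				   is rewritten into b = {}; (a zeroing
				   store) when fold_stmt runs  */
    }

Since fold_stmt is called by the early folding passes, the rewrite now
shows up in the ccp1 and forwprop1 dumps instead of fab1, which is what
the pr78408-1.c adjustment below checks.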
Signed-off-by: Andrew Pinski <quic_apinski@quicinc.com>
---
 gcc/gimple-fold.cc                      | 134 ++++++++++++++++++++++++
 gcc/testsuite/g++.dg/torture/except-2.C |  18 ++++
 gcc/testsuite/gcc.dg/pr78408-1.c        |   5 +-
 gcc/tree-ssa-ccp.cc                     | 132 +----------------------
 4 files changed, 156 insertions(+), 133 deletions(-)
 create mode 100644 gcc/testsuite/g++.dg/torture/except-2.C

diff --git a/gcc/gimple-fold.cc b/gcc/gimple-fold.cc
index 2746fcfe314..942de7720fd 100644
--- a/gcc/gimple-fold.cc
+++ b/gcc/gimple-fold.cc
@@ -894,6 +894,121 @@ size_must_be_zero_p (tree size)
   return vr.zero_p ();
 }
 
+/* Optimize
+   a = {};
+   b = a;
+   into
+   a = {};
+   b = {};
+   Similarly for memset (&a, ..., sizeof (a)); instead of a = {};
+   and/or memcpy (&b, &a, sizeof (a)); instead of b = a;  */
+
+static bool
+optimize_memcpy_to_memset (gimple_stmt_iterator *gsip, tree dest, tree src,
+			   tree len)
+{
+  gimple *stmt = gsi_stmt (*gsip);
+  if (gimple_has_volatile_ops (stmt))
+    return false;
+
+  tree vuse = gimple_vuse (stmt);
+  if (vuse == NULL || TREE_CODE (vuse) != SSA_NAME)
+    return false;
+
+  gimple *defstmt = SSA_NAME_DEF_STMT (vuse);
+  tree src2 = NULL_TREE, len2 = NULL_TREE;
+  poly_int64 offset, offset2;
+  tree val = integer_zero_node;
+  if (gimple_store_p (defstmt)
+      && gimple_assign_single_p (defstmt)
+      && TREE_CODE (gimple_assign_rhs1 (defstmt)) == CONSTRUCTOR
+      && !gimple_clobber_p (defstmt))
+    src2 = gimple_assign_lhs (defstmt);
+  else if (gimple_call_builtin_p (defstmt, BUILT_IN_MEMSET)
+	   && TREE_CODE (gimple_call_arg (defstmt, 0)) == ADDR_EXPR
+	   && TREE_CODE (gimple_call_arg (defstmt, 1)) == INTEGER_CST)
+    {
+      src2 = TREE_OPERAND (gimple_call_arg (defstmt, 0), 0);
+      len2 = gimple_call_arg (defstmt, 2);
+      val = gimple_call_arg (defstmt, 1);
+      /* For non-0 val, we'd have to transform stmt from assignment
+	 into memset (only if dest is addressable).  */
+      if (!integer_zerop (val) && is_gimple_assign (stmt))
+	src2 = NULL_TREE;
+    }
+
+  if (src2 == NULL_TREE)
+    return false;
+
+  if (len == NULL_TREE)
+    len = (TREE_CODE (src) == COMPONENT_REF
+	   ? DECL_SIZE_UNIT (TREE_OPERAND (src, 1))
+	   : TYPE_SIZE_UNIT (TREE_TYPE (src)));
+  if (len2 == NULL_TREE)
+    len2 = (TREE_CODE (src2) == COMPONENT_REF
+	    ? DECL_SIZE_UNIT (TREE_OPERAND (src2, 1))
+	    : TYPE_SIZE_UNIT (TREE_TYPE (src2)));
+  if (len == NULL_TREE
+      || !poly_int_tree_p (len)
+      || len2 == NULL_TREE
+      || !poly_int_tree_p (len2))
+    return false;
+
+  src = get_addr_base_and_unit_offset (src, &offset);
+  src2 = get_addr_base_and_unit_offset (src2, &offset2);
+  if (src == NULL_TREE
+      || src2 == NULL_TREE
+      || maybe_lt (offset, offset2))
+    return false;
+
+  if (!operand_equal_p (src, src2, 0))
+    return false;
+
+  /* [ src + offset2, src + offset2 + len2 - 1 ] is set to val.
+     Make sure that
+     [ src + offset, src + offset + len - 1 ] is a subset of that.  */
+  if (maybe_gt (wi::to_poly_offset (len) + (offset - offset2),
+		wi::to_poly_offset (len2)))
+    return false;
+
+  if (dump_file && (dump_flags & TDF_DETAILS))
+    {
+      fprintf (dump_file, "Simplified\n  ");
+      print_gimple_stmt (dump_file, stmt, 0, dump_flags);
+      fprintf (dump_file, "after previous\n  ");
+      print_gimple_stmt (dump_file, defstmt, 0, dump_flags);
+    }
+
+  /* For simplicity, don't change the kind of the stmt,
+     turn dest = src; into dest = {}; and memcpy (&dest, &src, len);
+     into memset (&dest, val, len);
+     In theory we could change dest = src into memset if dest
+     is addressable (maybe beneficial if val is not 0), or
+     memcpy (&dest, &src, len) into dest = {} if len is the size
+     of dest, dest isn't volatile.  */
+  if (is_gimple_assign (stmt))
+    {
+      tree ctor = build_constructor (TREE_TYPE (dest), NULL);
+      gimple_assign_set_rhs_from_tree (gsip, ctor);
+      update_stmt (stmt);
+    }
+  else /* If stmt is memcpy, transform it into memset.  */
+    {
+      gcall *call = as_a <gcall *> (stmt);
+      tree fndecl = builtin_decl_implicit (BUILT_IN_MEMSET);
+      gimple_call_set_fndecl (call, fndecl);
+      gimple_call_set_fntype (call, TREE_TYPE (fndecl));
+      gimple_call_set_arg (call, 1, val);
+      update_stmt (stmt);
+    }
+
+  if (dump_file && (dump_flags & TDF_DETAILS))
+    {
+      fprintf (dump_file, "into\n  ");
+      print_gimple_stmt (dump_file, stmt, 0, dump_flags);
+    }
+  return true;
+}
+
 /* Fold function call to builtin mem{{,p}cpy,move}.  Try to detect and
    diagnose (otherwise undefined) overlapping copies without preventing
    folding.  When folded, GCC guarantees that overlapping memcpy has
@@ -1171,6 +1286,15 @@ gimple_fold_builtin_memory_op (gimple_stmt_iterator *gsi,
 	  return false;
 	}
 
+      /* Try to optimize the memcpy to memset if src and dest
+	 are addresses.  */
+      if (code != BUILT_IN_MEMPCPY
+	  && TREE_CODE (dest) == ADDR_EXPR
+	  && TREE_CODE (src) == ADDR_EXPR
+	  && TREE_CODE (len) == INTEGER_CST
+	  && optimize_memcpy_to_memset (gsi, TREE_OPERAND (dest, 0),
+					TREE_OPERAND (src, 0), len))
+	return true;
+
       if (!tree_fits_shwi_p (len))
 	return false;
       if (!srctype
@@ -6475,6 +6599,16 @@ fold_stmt_1 (gimple_stmt_iterator *gsi, bool inplace, tree (*valueize) (tree),
     {
     case GIMPLE_ASSIGN:
       {
+	if (gimple_assign_load_p (stmt) && gimple_store_p (stmt))
+	  {
+	    if (optimize_memcpy_to_memset (gsi, gimple_assign_lhs (stmt),
+					   gimple_assign_rhs1 (stmt),
+					   /* len = */NULL_TREE))
+	      {
+		changed = true;
+		break;
+	      }
+	  }
 	/* Try to canonicalize for boolean-typed X the comparisons
 	   X == 0, X == 1, X != 0, and X != 1.  */
 	if (gimple_assign_rhs_code (stmt) == EQ_EXPR
diff --git a/gcc/testsuite/g++.dg/torture/except-2.C b/gcc/testsuite/g++.dg/torture/except-2.C
new file mode 100644
index 00000000000..d896937a118
--- /dev/null
+++ b/gcc/testsuite/g++.dg/torture/except-2.C
@@ -0,0 +1,18 @@
+// { dg-do compile }
+// { dg-additional-options "-fexceptions -fnon-call-exceptions" }
+// PR tree-optimization/116601
+
+struct RefitOption {
+  char subtype;
+  int string;
+} n;
+void h(RefitOption);
+void k(RefitOption *__val)
+{
+  try {
+    *__val = RefitOption{};
+    RefitOption __trans_tmp_2 = *__val;
+    h(__trans_tmp_2);
+  }
+  catch(...){}
+}
diff --git a/gcc/testsuite/gcc.dg/pr78408-1.c b/gcc/testsuite/gcc.dg/pr78408-1.c
index dc9870ac6af..a2d9306848b 100644
--- a/gcc/testsuite/gcc.dg/pr78408-1.c
+++ b/gcc/testsuite/gcc.dg/pr78408-1.c
@@ -1,7 +1,8 @@
 /* PR c/78408 */
 /* { dg-do compile { target size32plus } } */
-/* { dg-options "-O2 -fdump-tree-fab1-details" } */
-/* { dg-final { scan-tree-dump-times "after previous" 17 "fab1" } } */
+/* { dg-options "-O2 -fdump-tree-ccp-details -fdump-tree-forwprop-details" } */
+/* { dg-final { scan-tree-dump-times "after previous" 1 "ccp1" } } */
+/* { dg-final { scan-tree-dump-times "after previous" 16 "forwprop1" } } */
 
 struct S { char a[33]; };
 struct T { char a[65536]; };
diff --git a/gcc/tree-ssa-ccp.cc b/gcc/tree-ssa-ccp.cc
index 44711018e0e..47b2ce9441e 100644
--- a/gcc/tree-ssa-ccp.cc
+++ b/gcc/tree-ssa-ccp.cc
@@ -4159,120 +4159,6 @@ optimize_atomic_op_fetch_cmp_0 (gimple_stmt_iterator *gsip,
   return true;
 }
 
-/* Optimize
-   a = {};
-   b = a;
-   into
-   a = {};
-   b = {};
-   Similarly for memset (&a, ..., sizeof (a)); instead of a = {};
-   and/or memcpy (&b, &a, sizeof (a)); instead of b = a;  */
-
-static void
-optimize_memcpy (gimple_stmt_iterator *gsip,
-		  tree dest, tree src, tree len)
-{
-  gimple *stmt = gsi_stmt (*gsip);
-  if (gimple_has_volatile_ops (stmt))
-    return;
-
-  tree vuse = gimple_vuse (stmt);
-  if (vuse == NULL)
-    return;
-
-  gimple *defstmt = SSA_NAME_DEF_STMT (vuse);
-  tree src2 = NULL_TREE, len2 = NULL_TREE;
-  poly_int64 offset, offset2;
-  tree val = integer_zero_node;
-  if (gimple_store_p (defstmt)
-      && gimple_assign_single_p (defstmt)
-      && TREE_CODE (gimple_assign_rhs1 (defstmt)) == CONSTRUCTOR
-      && !gimple_clobber_p (defstmt))
-    src2 = gimple_assign_lhs (defstmt);
-  else if (gimple_call_builtin_p (defstmt, BUILT_IN_MEMSET)
-	   && TREE_CODE (gimple_call_arg (defstmt, 0)) == ADDR_EXPR
-	   && TREE_CODE (gimple_call_arg (defstmt, 1)) == INTEGER_CST)
-    {
-      src2 = TREE_OPERAND (gimple_call_arg (defstmt, 0), 0);
-      len2 = gimple_call_arg (defstmt, 2);
-      val = gimple_call_arg (defstmt, 1);
-      /* For non-0 val, we'd have to transform stmt from assignment
-	 into memset (only if dest is addressable).  */
-      if (!integer_zerop (val) && is_gimple_assign (stmt))
-	src2 = NULL_TREE;
-    }
-
-  if (src2 == NULL_TREE)
-    return;
-
-  if (len == NULL_TREE)
-    len = (TREE_CODE (src) == COMPONENT_REF
-	   ? DECL_SIZE_UNIT (TREE_OPERAND (src, 1))
-	   : TYPE_SIZE_UNIT (TREE_TYPE (src)));
-  if (len2 == NULL_TREE)
-    len2 = (TREE_CODE (src2) == COMPONENT_REF
-	    ? DECL_SIZE_UNIT (TREE_OPERAND (src2, 1))
-	    : TYPE_SIZE_UNIT (TREE_TYPE (src2)));
-  if (len == NULL_TREE
-      || !poly_int_tree_p (len)
-      || len2 == NULL_TREE
-      || !poly_int_tree_p (len2))
-    return;
-
-  src = get_addr_base_and_unit_offset (src, &offset);
-  src2 = get_addr_base_and_unit_offset (src2, &offset2);
-  if (src == NULL_TREE
-      || src2 == NULL_TREE
-      || maybe_lt (offset, offset2))
-    return;
-
-  if (!operand_equal_p (src, src2, 0))
-    return;
-
-  /* [ src + offset2, src + offset2 + len2 - 1 ] is set to val.
-     Make sure that
-     [ src + offset, src + offset + len - 1 ] is a subset of that.  */
-  if (maybe_gt (wi::to_poly_offset (len) + (offset - offset2),
-		wi::to_poly_offset (len2)))
-    return;
-
-  if (dump_file && (dump_flags & TDF_DETAILS))
-    {
-      fprintf (dump_file, "Simplified\n  ");
-      print_gimple_stmt (dump_file, stmt, 0, dump_flags);
-      fprintf (dump_file, "after previous\n  ");
-      print_gimple_stmt (dump_file, defstmt, 0, dump_flags);
-    }
-
-  /* For simplicity, don't change the kind of the stmt,
-     turn dest = src; into dest = {}; and memcpy (&dest, &src, len);
-     into memset (&dest, val, len);
-     In theory we could change dest = src into memset if dest
-     is addressable (maybe beneficial if val is not 0), or
-     memcpy (&dest, &src, len) into dest = {} if len is the size
-     of dest, dest isn't volatile.  */
-  if (is_gimple_assign (stmt))
-    {
-      tree ctor = build_constructor (TREE_TYPE (dest), NULL);
-      gimple_assign_set_rhs_from_tree (gsip, ctor);
-      update_stmt (stmt);
-    }
-  else /* If stmt is memcpy, transform it into memset.  */
-    {
-      gcall *call = as_a <gcall *> (stmt);
-      tree fndecl = builtin_decl_implicit (BUILT_IN_MEMSET);
-      gimple_call_set_fndecl (call, fndecl);
-      gimple_call_set_fntype (call, TREE_TYPE (fndecl));
-      gimple_call_set_arg (call, 1, val);
-      update_stmt (stmt);
-    }
-
-  if (dump_file && (dump_flags & TDF_DETAILS))
-    {
-      fprintf (dump_file, "into\n  ");
-      print_gimple_stmt (dump_file, stmt, 0, dump_flags);
-    }
-}
-
 /* A simple pass that attempts to fold all builtin functions.  This
    pass is run after we've propagated as many constants as we can.  */
*/ @@ -4322,11 +4208,8 @@ pass_fold_builtins::execute (function *fun) stmt = gsi_stmt (i); - if (gimple_code (stmt) != GIMPLE_CALL) + if (gimple_code (stmt) != GIMPLE_CALL) { - if (gimple_assign_load_p (stmt) && gimple_store_p (stmt)) - optimize_memcpy (&i, gimple_assign_lhs (stmt), - gimple_assign_rhs1 (stmt), NULL_TREE); gsi_next (&i); continue; } @@ -4532,19 +4415,6 @@ pass_fold_builtins::execute (function *fun) false); break; - case BUILT_IN_MEMCPY: - if (gimple_call_builtin_p (stmt, BUILT_IN_NORMAL) - && TREE_CODE (gimple_call_arg (stmt, 0)) == ADDR_EXPR - && TREE_CODE (gimple_call_arg (stmt, 1)) == ADDR_EXPR - && TREE_CODE (gimple_call_arg (stmt, 2)) == INTEGER_CST) - { - tree dest = TREE_OPERAND (gimple_call_arg (stmt, 0), 0); - tree src = TREE_OPERAND (gimple_call_arg (stmt, 1), 0); - tree len = gimple_call_arg (stmt, 2); - optimize_memcpy (&i, dest, src, len); - } - break; - case BUILT_IN_VA_START: case BUILT_IN_VA_END: case BUILT_IN_VA_COPY: