From patchwork Thu Oct 17 02:42:04 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Andrew Pinski
X-Patchwork-Id: 1998373
X-Original-To: gcc-patches@gcc.gnu.org
From: Andrew Pinski
Subject: [PATCHv2 1/2] cfgexpand: Handle scope conflicts better [PR111422]
Date: Wed, 16 Oct 2024 19:42:04 -0700
Message-ID: <20241017024205.2660484-1-quic_apinski@quicinc.com>
X-Mailer: git-send-email 2.34.1
List-Id: Gcc-patches mailing list

After fixing loop-im to do the correct overflow
rewriting for pointer types too, we end up with code like:

```
  _9 = (unsigned long) &g;
  _84 = _9 + 18446744073709551615;
  _11 = _42 + _84;
  _44 = (signed char *) _11;
  ...
  *_44 = 10;
  g ={v} {CLOBBER(eos)};
  ...
  n[0] = &f;
  *_44 = 8;
  g ={v} {CLOBBER(eos)};
```

This was not being recognized by the scope conflicts code, because that
code handled only one level of walking back the defining statements
rather than multiple levels.

This fixes the problem by using a work list to avoid deep recursion and
a visited bitmap to avoid ending up in an infinite loop when dealing
with loops.  It also adds a cache of the ADDR_EXPRs associated with each
SSA name.  I found a two-element cache to be a decent trade-off between
size and speed: most SSA names have only one address associated with
them, but there are times (PHIs) when two or more will be present, and
two is the common case for most if statements.

gcc/ChangeLog:

	PR middle-end/111422
	* cfgexpand.cc: Define INCLUDE_STRING if ADDR_WALKER_STATS
	is defined.
	(class addr_ssa_walker): New class.
	(add_scope_conflicts_2): Rename to ...
	(addr_ssa_walker::operator()): This and rewrite to be a full
	walk of all operands and their uses and use a cache.
	(add_scope_conflicts_1): Add new argument WALKER for the addr
	cache.  Just walk the phi result since that will include all
	addr_exprs.  Change call to add_scope_conflicts_2 to walker.
	(add_scope_conflicts): Add walker variable and update call to
	add_scope_conflicts_1.

Signed-off-by: Andrew Pinski <quic_apinski@quicinc.com>
---
 gcc/cfgexpand.cc | 207 ++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 176 insertions(+), 31 deletions(-)

diff --git a/gcc/cfgexpand.cc b/gcc/cfgexpand.cc
index 6c1096363af..74f4cfc0f22 100644
--- a/gcc/cfgexpand.cc
+++ b/gcc/cfgexpand.cc
@@ -17,6 +17,9 @@ You should have received a copy of the GNU General Public License
 along with GCC; see the file COPYING3.  If not see
 <http://www.gnu.org/licenses/>.  */
 
+#ifdef ADDR_WALKER_STATS
+#define INCLUDE_STRING
+#endif
 #include "config.h"
 #include "system.h"
 #include "coretypes.h"
@@ -571,35 +574,175 @@ visit_conflict (gimple *, tree op, tree, void *data)
   return false;
 }
 
-/* Helper function for add_scope_conflicts_1.  For USE on
-   a stmt, if it is a SSA_NAME and in its SSA_NAME_DEF_STMT is known to be
-   based on some ADDR_EXPR, invoke VISIT on that ADDR_EXPR.  */
+namespace {
 
-static inline void
-add_scope_conflicts_2 (tree use, bitmap work,
-		       walk_stmt_load_store_addr_fn visit)
+class addr_ssa_walker
+{
+private:
+  struct addr_cache
+  {
+  private:
+    unsigned elems = 0;
+    static constexpr unsigned maxelements = 2;
+    bool visited = false;
+    tree cached[maxelements] = {};
+  public:
+    /* Returns true if the cache is valid.  */
+    operator bool ()
+    {
+      return visited && elems <= maxelements;
+    }
+    /* Mark as visited.  The cache might be invalidated
+       by adding too many elements though.  */
+    void visit () { visited = true; }
+    /* Iterators over the cached values.  */
+    tree *begin () { return &cached[0]; }
+    tree *end ()
+    {
+      /* If there were too many elements, there is
+	 nothing to visit in the cache.  */
+      if (elems > maxelements)
+	return &cached[0];
+      return &cached[elems];
+    }
+    /* Add ADDR to the cache if it is not there already.  */
+    void add (tree addr)
+    {
+      if (elems > maxelements)
+	{
+	  statistics_counter_event (cfun, "addr_walker already overflow", 1);
+	  return;
+	}
+      /* Skip if the cache already contains the addr_expr.  */
+      for (tree other : *this)
+	if (operand_equal_p (other, addr))
+	  return;
+      elems++;
+      /* Record that the cache overflowed.  */
+      if (elems > maxelements)
+	{
+	  statistics_counter_event (cfun, "addr_walker overflow", 1);
+	  return;
+	}
+      cached[elems - 1] = addr;
+    }
+  };
+public:
+  addr_ssa_walker () : cache (new addr_cache[num_ssa_names]{}) { }
+  ~addr_ssa_walker () { delete[] cache; }
+
+  /* Walk NAME and its defining statement,
+     calling FUNC for the ADDR_EXPRs.  */
+  template <typename T>
+  void operator () (tree name, T func);
+
+private:
+
+  /* Cannot create a copy.  */
+  addr_ssa_walker (const addr_ssa_walker &) = delete;
+  addr_ssa_walker (addr_ssa_walker &&) = delete;
+  /* Return the cache entries for SSA name NAME.  */
+  addr_cache &operator[] (tree name)
+  {
+    return cache[SSA_NAME_VERSION (name)];
+  }
+
+  addr_cache *cache;
+};
+
+/* Walk backwards on the defining statements of NAME
+   and call FUNC on the ADDR_EXPRs.  Use the cache for
+   the SSA name if possible.  */
+
+template <typename T>
+void
+addr_ssa_walker::operator() (tree name, T func)
 {
-  if (TREE_CODE (use) == SSA_NAME
-      && (POINTER_TYPE_P (TREE_TYPE (use))
-	  || INTEGRAL_TYPE_P (TREE_TYPE (use))))
+  gcc_assert (TREE_CODE (name) == SSA_NAME);
+  auto_vec<std::pair<tree, tree>, 4> work_list;
+  auto_bitmap visited_ssa_names;
+  work_list.safe_push (std::make_pair (name, name));
+
+#ifdef ADDR_WALKER_STATS
+  unsigned process_list = 0;
+#endif
+
+  while (!work_list.is_empty ())
     {
+      auto work = work_list.pop ();
+      tree use = work.first;
+      tree old_name = work.second;
+#ifdef ADDR_WALKER_STATS
+      process_list++;
+#endif
+
+      if (!use)
+	continue;
+      /* For ADDR_EXPRs, call the function and update the cache.  */
+      if (TREE_CODE (use) == ADDR_EXPR)
+	{
+	  func (use);
+	  (*this)[old_name].add (use);
+	  continue;
+	}
+      /* Ignore all non SSA names.  */
+      if (TREE_CODE (use) != SSA_NAME)
+	continue;
+
+      /* Only pointer and integral types are used to track addresses.  */
+      if (!POINTER_TYPE_P (TREE_TYPE (use))
+	  && !INTEGRAL_TYPE_P (TREE_TYPE (use)))
+	continue;
+
+      /* Check the cache; if there is a hit, use it.  */
+      if ((*this)[use])
+	{
+	  statistics_counter_event (cfun, "addr_walker cache hit", 1);
+	  /* Call the function and update the cache.  */
+	  for (tree naddr : (*this)[use])
+	    {
+	      (*this)[old_name].add (naddr);
+	      func (naddr);
+	    }
+	  continue;
+	}
       gimple *g = SSA_NAME_DEF_STMT (use);
+      /* Mark the use as visited so even if the cache is empty,
+	 there is no reason to walk the names backwards again.  */
+      (*this)[use].visit ();
+      /* Skip the name if it was already visited this time.  */
+      if (!bitmap_set_bit (visited_ssa_names, SSA_NAME_VERSION (use)))
+	continue;
+      /* Since old_name needs updating afterwards, add it back to the
+	 work list; this will either hit the cache, or the visited bitmap
+	 will skip it if too many names were associated with the cache.  */
+      work_list.safe_push (work);
+
+      /* For assign statements, add each operand to the work list.
+	 Note operand 0 is the same as the use here, so there is nothing
+	 to be done for it.  */
       if (gassign *a = dyn_cast <gassign *> (g))
 	{
-	  if (tree op = gimple_assign_rhs1 (a))
-	    if (TREE_CODE (op) == ADDR_EXPR)
-	      visit (a, TREE_OPERAND (op, 0), op, work);
+	  for (unsigned i = 1; i < gimple_num_ops (g); i++)
+	    work_list.safe_push (std::make_pair (gimple_op (a, i), use));
 	}
+      /* For PHI nodes, add each argument to the work list.  */
       else if (gphi *p = dyn_cast <gphi *> (g))
 	for (unsigned i = 0; i < gimple_phi_num_args (p); ++i)
-	  if (TREE_CODE (use = gimple_phi_arg_def (p, i)) == SSA_NAME)
-	    if (gassign *a = dyn_cast <gassign *> (SSA_NAME_DEF_STMT (use)))
-	      {
-		if (tree op = gimple_assign_rhs1 (a))
-		  if (TREE_CODE (op) == ADDR_EXPR)
-		    visit (a, TREE_OPERAND (op, 0), op, work);
-	      }
-    }
+	  work_list.safe_push (std::make_pair (gimple_phi_arg_def (p, i), use));
+      /* Assume all other kinds of statements cannot contribute to an
+	 address being alive.  */
+    }
+  /* This stat is here to see how long the longest walk is; it is not
+     a useful stat except when tuning
+     addr_ssa_walker::addr_cache::maxelements.  */
+#ifdef ADDR_WALKER_STATS
+  statistics_counter_event (cfun,
+			    ("addr_walker process "
+			     + std::to_string (process_list)).c_str (),
+			    1);
+#endif
+}
+
 }
 
 /* Helper routine for add_scope_conflicts, calculating the active partitions
@@ -608,7 +751,8 @@ add_scope_conflicts_2 (tree use, bitmap work,
    liveness.  */
 
 static void
-add_scope_conflicts_1 (basic_block bb, bitmap work, bool for_conflict)
+add_scope_conflicts_1 (basic_block bb, bitmap work, bool for_conflict,
+		       addr_ssa_walker &walker)
 {
   edge e;
   edge_iterator ei;
@@ -623,14 +767,14 @@ add_scope_conflicts_1 (basic_block bb, bitmap work, bool for_conflict)
     visit = visit_op;
 
+  auto addrvisitor = [&visit, &work] (tree addr) {
+    gcc_assert (TREE_CODE (addr) == ADDR_EXPR);
+    visit (nullptr, TREE_OPERAND (addr, 0), addr, work);
+  };
+
+  /* Walk over the phis for the incoming addresses to be alive.  */
   for (gsi = gsi_start_phis (bb); !gsi_end_p (gsi); gsi_next (&gsi))
-    {
-      gimple *stmt = gsi_stmt (gsi);
-      gphi *phi = as_a <gphi *> (stmt);
-      walk_stmt_load_store_addr_ops (stmt, work, NULL, NULL, visit);
-      FOR_EACH_PHI_ARG (use_p, phi, iter, SSA_OP_USE)
-	add_scope_conflicts_2 (USE_FROM_PTR (use_p), work, visit);
-    }
+    walker (gimple_phi_result (gsi_stmt (gsi)), addrvisitor);
   for (gsi = gsi_after_labels (bb); !gsi_end_p (gsi); gsi_next (&gsi))
     {
       gimple *stmt = gsi_stmt (gsi);
 
@@ -676,7 +820,7 @@ add_scope_conflicts_1 (basic_block bb, bitmap work, bool for_conflict)
 	}
       walk_stmt_load_store_addr_ops (stmt, work, visit, visit, visit);
       FOR_EACH_SSA_USE_OPERAND (use_p, stmt, iter, SSA_OP_USE)
-	add_scope_conflicts_2 (USE_FROM_PTR (use_p), work, visit);
+	walker (USE_FROM_PTR (use_p), addrvisitor);
     }
 }
 
@@ -707,6 +851,7 @@ add_scope_conflicts (void)
   bitmap work = BITMAP_ALLOC (NULL);
   int *rpo;
   int n_bbs;
+  addr_ssa_walker walker;
 
   /* We approximate the live range of a stack variable by taking the first
      mention of its name as starting point(s), and by the end-of-scope
@@ -734,14 +879,14 @@ add_scope_conflicts (void)
 	  bitmap active;
 	  bb = BASIC_BLOCK_FOR_FN (cfun, rpo[i]);
 	  active = (bitmap)bb->aux;
-	  add_scope_conflicts_1 (bb, work, false);
+	  add_scope_conflicts_1 (bb, work, false, walker);
 	  if (bitmap_ior_into (active, work))
 	    changed = true;
 	}
     }
 
   FOR_EACH_BB_FN (bb, cfun)
-    add_scope_conflicts_1 (bb, work, true);
+    add_scope_conflicts_1 (bb, work, true, walker);
 
   free (rpo);
   BITMAP_FREE (work);
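
For readers following along, here is a minimal, self-contained sketch of the work-list walk the patch performs. This is not GCC code: `Graph`, `collect_addresses`, and the convention that leaves starting with `&` stand in for ADDR_EXPRs are all invented for illustration, and the sketch omits the two-element per-name cache, keeping only the work list plus visited set that make the walk recursion-free and loop-safe:

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <unordered_map>
#include <unordered_set>
#include <vector>

// Hypothetical stand-in for SSA def-use chains: each name maps to the
// operands of its defining statement.  Leaves beginning with '&' play
// the role of ADDR_EXPRs.
using Graph = std::unordered_map<std::string, std::vector<std::string>>;

// Iterative walk-back over the defining "statements" of NAME using an
// explicit work list and a visited set, so deep chains cause no deep
// recursion and cycles (loops in the CFG) still terminate.  Collects
// every address leaf reachable from NAME.
std::vector<std::string>
collect_addresses (const Graph &graph, const std::string &name)
{
  std::vector<std::string> result;
  std::unordered_set<std::string> visited;
  std::vector<std::string> work_list{name};
  while (!work_list.empty ())
    {
      std::string use = work_list.back ();
      work_list.pop_back ();
      // "ADDR_EXPR" leaf: record it (once) and stop walking this path.
      if (!use.empty () && use[0] == '&')
	{
	  if (std::find (result.begin (), result.end (), use)
	      == result.end ())
	    result.push_back (use);
	  continue;
	}
      // Skip names already visited during this walk; this is what
      // makes cycles in the def-use graph safe.
      if (!visited.insert (use).second)
	continue;
      auto it = graph.find (use);
      if (it == graph.end ())
	continue;
      // Push every operand of the defining statement.
      for (const std::string &op : it->second)
	work_list.push_back (op);
    }
  return result;
}
```

With a chain shaped like the one in the commit message (`_44` defined from `_11`, `_11` from `_42` and `_84`, `_84` from `_9` which takes `&g`, and `_42` both looping back through `_11` and carrying `&f`), the walk terminates and reports both `&g` and `&f` as alive.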