From patchwork Tue Nov 3 15:19:18 2020
X-Patchwork-Submitter: Andrew MacLeod
X-Patchwork-Id: 1393129
Subject: [PATCH] Tweaks to ranger cache
To: gcc-patches
Message-ID: <16df679e-e138-d6b5-e2f5-6bfa09216767@redhat.com>
Date: Tue, 3 Nov 2020 10:19:18 -0500
From: Andrew MacLeod

This patch does some minor tweaking to the ranger caches.

1 - The ssa_block_ranges class is a vector over the basic blocks and
contains pointers to any calculated on-entry ranges for an ssa-name.
The class wasn't doing any bounds checking.  It's a pretty controlled
environment, but it is still safer to have a check in case we stumble
across a place where we are unexpectedly working in a new BB.

2 - The on-entry cache consists of a vector of ssa_block_ranges;
whenever a name is accessed for the first time, its vector is
allocated, full of NULL range pointers.  There are many times when we
only query whether a range exists or not, and we don't need to do the
allocation to answer that (see the sketch after the ChangeLog below).
Avoiding it picked up a significant amount of time - enough to absorb
some extra work I am going to add later :-)

3 - The ranger cache was simply exporting its 3 component caches
publicly for the ranger to consume until I got around to fixing it.
This patch privatizes the global range cache and the on-entry block
cache.  The global cache is now accessed through new ranger_cache get
and set routines, and the on-entry cache was being accessed purely for
its dump listing, so that is now hidden behind a new ranger_cache dump
facility.

There are no real functional changes; this is all fairly superficial,
but important for follow-on work.  We'll see if the asserts trigger
anywhere.

Bootstrapped on x86_64-pc-linux-gnu, no regressions.  Pushed.

Andrew

commit d0a90d8e40a9024ed9297b63a34ac9b0f080ed5b
Author: Andrew MacLeod
Date:   Mon Nov 2 13:06:46 2020 -0500

    Tweaks to ranger cache

    Add some bounds checking to ssa_block_ranges, and privatize the
    ranges block cache and global cache, adding API points for accessing
    them.

        * gimple-range-cache.h (block_range_cache): Add new entry point.
        (ranger_cache): Privatize global and block cache members.
        * gimple-range-cache.cc (ssa_block_ranges::set_bb_range): Add
        bounds check.
        (ssa_block_ranges::set_bb_varying): Ditto.
        (ssa_block_ranges::get_bb_range): Ditto.
        (ssa_block_ranges::bb_range_p): Ditto.
        (block_range_cache::get_block_ranges): Fix formatting.
        (block_range_cache::query_block_ranges): New.
        (block_range_cache::get_bb_range): Use query_block_ranges.
        (block_range_cache::bb_range_p): Ditto.
        (ranger_cache::dump): New.
        (ranger_cache::get_global_range): New.
        (ranger_cache::set_global_range): New.
        * gimple-range.cc (gimple_ranger::range_of_expr): Use new API.
        (gimple_ranger::range_of_stmt): Ditto.
        (gimple_ranger::export_global_ranges): Ditto.
        (gimple_ranger::dump): Ditto.
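For anyone not familiar with how the on-entry cache is laid out, here is a
minimal standalone sketch of the two patterns described above: the
bounds-check assert and the query-vs-get accessors that avoid the lazy
per-name allocation.  This is NOT the GCC code; every name here
(per_name_cache, block_cache, query_name_cache, ...) is made up for
illustration, and std::vector plus plain new stand in for vec<> and the
range allocators.

// Standalone illustration only -- simplified stand-ins, not the GCC classes.
#include <cassert>
#include <cstdio>
#include <vector>

struct range { int lo, hi; };           // stand-in for irange

class per_name_cache
{
  // One slot per basic block; NULL means no on-entry range recorded.
  std::vector<range *> m_tab;
public:
  per_name_cache (unsigned nblocks) : m_tab (nblocks, NULL) {}

  void set_bb_range (unsigned bb, range *r)
  {
    assert (bb < m_tab.size ());        // the kind of bounds check the patch adds
    m_tab[bb] = r;
  }

  bool bb_range_p (unsigned bb)
  {
    assert (bb < m_tab.size ());
    return m_tab[bb] != NULL;
  }
};

class block_cache
{
  // One per-name cache, allocated lazily the first time a name is set.
  // (Destructors are omitted in this sketch; GCC uses its own allocators.)
  std::vector<per_name_cache *> m_ssa;
  unsigned m_nblocks;
public:
  block_cache (unsigned nnames, unsigned nblocks)
    : m_ssa (nnames, NULL), m_nblocks (nblocks) {}

  // "get" semantics: allocate the per-name vector on first use.
  per_name_cache &get_name_cache (unsigned name)
  {
    if (!m_ssa[name])
      m_ssa[name] = new per_name_cache (m_nblocks);
    return *m_ssa[name];
  }

  // "query" semantics: never allocate.  A missing vector simply means no
  // range was ever set for this name.
  per_name_cache *query_name_cache (unsigned name)
  {
    if (name >= m_ssa.size () || !m_ssa[name])
      return NULL;
    return m_ssa[name];
  }

  // Read-only probes go through the query path, so asking about a name
  // that was never set costs no allocation.
  bool bb_range_p (unsigned name, unsigned bb)
  {
    per_name_cache *p = query_name_cache (name);
    return p ? p->bb_range_p (bb) : false;
  }
};

int
main ()
{
  block_cache cache (100, 10);
  range r = { 0, 42 };
  cache.get_name_cache (5).set_bb_range (3, &r);   // allocates for name 5
  printf ("%d %d\n", cache.bb_range_p (5, 3),      // prints "1 0"
          cache.bb_range_p (7, 3));                // no allocation for name 7
  return 0;
}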
diff --git a/gcc/gimple-range-cache.cc b/gcc/gimple-range-cache.cc
index bc9243c1279..574debbc166 100644
--- a/gcc/gimple-range-cache.cc
+++ b/gcc/gimple-range-cache.cc
@@ -165,6 +165,7 @@ ssa_block_ranges::~ssa_block_ranges ()
 void
 ssa_block_ranges::set_bb_range (const basic_block bb, const irange &r)
 {
+  gcc_checking_assert ((unsigned) bb->index < m_tab.length ());
   irange *m = m_irange_allocator->allocate (r);
   m_tab[bb->index] = m;
 }
@@ -174,6 +175,7 @@ ssa_block_ranges::set_bb_range (const basic_block bb, const irange &r)
 void
 ssa_block_ranges::set_bb_varying (const basic_block bb)
 {
+  gcc_checking_assert ((unsigned) bb->index < m_tab.length ());
   m_tab[bb->index] = m_type_range;
 }
 
@@ -183,6 +185,7 @@ ssa_block_ranges::set_bb_varying (const basic_block bb)
 bool
 ssa_block_ranges::get_bb_range (irange &r, const basic_block bb)
 {
+  gcc_checking_assert ((unsigned) bb->index < m_tab.length ());
   irange *m = m_tab[bb->index];
   if (m)
     {
@@ -197,6 +200,7 @@ ssa_block_ranges::get_bb_range (irange &r, const basic_block bb)
 bool
 ssa_block_ranges::bb_range_p (const basic_block bb)
 {
+  gcc_checking_assert ((unsigned) bb->index < m_tab.length ());
   return m_tab[bb->index] != NULL;
 }
 
@@ -244,8 +248,8 @@ block_range_cache::~block_range_cache ()
   m_ssa_ranges.release ();
 }
 
-// Return a reference to the m_block_cache for NAME.  If it has not been
-// accessed yet, allocate it.
+// Return a reference to the ssa_block_cache for NAME.  If it has not been
+// accessed yet, allocate it first.
 
 ssa_block_ranges &
 block_range_cache::get_block_ranges (tree name)
@@ -255,11 +259,24 @@ block_range_cache::get_block_ranges (tree name)
     m_ssa_ranges.safe_grow_cleared (num_ssa_names + 1);
 
   if (!m_ssa_ranges[v])
-    m_ssa_ranges[v] = new ssa_block_ranges (TREE_TYPE (name), m_irange_allocator);
-
+    m_ssa_ranges[v] = new ssa_block_ranges (TREE_TYPE (name),
+                                            m_irange_allocator);
   return *(m_ssa_ranges[v]);
 }
 
+
+// Return a pointer to the ssa_block_cache for NAME.  If it has not been
+// accessed yet, return NULL.
+
+ssa_block_ranges *
+block_range_cache::query_block_ranges (tree name)
+{
+  unsigned v = SSA_NAME_VERSION (name);
+  if (v >= m_ssa_ranges.length () || !m_ssa_ranges[v])
+    return NULL;
+  return m_ssa_ranges[v];
+}
+
 // Set the range for NAME on entry to block BB to R.
 
 void
@@ -283,7 +300,10 @@ block_range_cache::set_bb_varying (tree name, const basic_block bb)
 bool
 block_range_cache::get_bb_range (irange &r, tree name, const basic_block bb)
 {
-  return get_block_ranges (name).get_bb_range (r, bb);
+  ssa_block_ranges *ptr = query_block_ranges (name);
+  if (ptr)
+    return ptr->get_bb_range (r, bb);
+  return false;
 }
 
 // Return true if NAME has a range set in block BB.
@@ -291,7 +311,10 @@ block_range_cache::get_bb_range (irange &r, tree name, const basic_block bb)
 bool
 block_range_cache::bb_range_p (tree name, const basic_block bb)
 {
-  return get_block_ranges (name).bb_range_p (bb);
+  ssa_block_ranges *ptr = query_block_ranges (name);
+  if (ptr)
+    return ptr->bb_range_p (bb);
+  return false;
 }
 
 // Print all known block caches to file F.
@@ -472,6 +495,46 @@ ranger_cache::~ranger_cache ()
   m_update_list.release ();
 }
 
+// Dump the global caches to file F.  if GORI_DUMP is true, dump the
+// gori map as well.
+
+void
+ranger_cache::dump (FILE *f, bool gori_dump)
+{
+  m_globals.dump (f);
+  if (gori_dump)
+    {
+      fprintf (f, "\nDUMPING GORI MAP\n");
+      gori_compute::dump (f);
+    }
+  fprintf (f, "\n");
+}
+
+// Dump the caches for basic block BB to file F.
+
+void
+ranger_cache::dump (FILE *f, basic_block bb)
+{
+  m_on_entry.dump (f, bb);
+}
+
+// Get the global range for NAME, and return in R.  Return false if the
+// global range is not set.
+
+bool
+ranger_cache::get_global_range (irange &r, tree name) const
+{
+  return m_globals.get_global_range (r, name);
+}
+
+// Set the global range of NAME to R.
+
+void
+ranger_cache::set_global_range (tree name, const irange &r)
+{
+  m_globals.set_global_range (name, r);
+}
+
 // Push a request for a new lookup in block BB of name.  Return true if
 // the request is actually made (ie, isn't a duplicate).
 
@@ -869,5 +932,4 @@ ranger_cache::fill_block_cache (tree name, basic_block bb, basic_block def_bb)
          iterative_cache_update (name);
        }
     }
-
 }
diff --git a/gcc/gimple-range-cache.h b/gcc/gimple-range-cache.h
index 29ab01e2a98..599a2926b53 100644
--- a/gcc/gimple-range-cache.h
+++ b/gcc/gimple-range-cache.h
@@ -60,6 +60,7 @@ public:
 private:
   vec<ssa_block_ranges *> m_ssa_ranges;
   ssa_block_ranges &get_block_ranges (tree name);
+  ssa_block_ranges *query_block_ranges (tree name);
   irange_allocator *m_irange_allocator;
 };
 
@@ -95,10 +96,16 @@ public:
   virtual void ssa_range_in_bb (irange &r, tree name, basic_block bb);
   bool block_range (irange &r, basic_block bb, tree name, bool calc = true);
 
-  ssa_global_cache m_globals;
-  block_range_cache m_on_entry;
+  bool get_global_range (irange &r, tree name) const;
+  void set_global_range (tree name, const irange &r);
+
   non_null_ref m_non_null;
+
+  void dump (FILE *f, bool dump_gori = true);
+  void dump (FILE *f, basic_block bb);
 private:
+  ssa_global_cache m_globals;
+  block_range_cache m_on_entry;
   void add_to_update (basic_block bb);
   void fill_block_cache (tree name, basic_block bb, basic_block def_bb);
   void iterative_cache_update (tree name);
diff --git a/gcc/gimple-range.cc b/gcc/gimple-range.cc
index cf979845acf..8fdcc310111 100644
--- a/gcc/gimple-range.cc
+++ b/gcc/gimple-range.cc
@@ -897,7 +897,7 @@ gimple_ranger::range_of_expr (irange &r, tree expr, gimple *stmt)
   // If there is no statement, just get the global value.
   if (!stmt)
     {
-      if (!m_cache.m_globals.get_global_range (r, expr))
+      if (!m_cache.get_global_range (r, expr))
        r = gimple_range_global (expr);
       return true;
     }
@@ -1010,18 +1010,18 @@ gimple_ranger::range_of_stmt (irange &r, gimple *s, tree name)
     return false;
 
   // If this STMT has already been processed, return that value.
-  if (m_cache.m_globals.get_global_range (r, name))
+  if (m_cache.get_global_range (r, name))
     return true;
 
   // Avoid infinite recursion by initializing global cache
   int_range_max tmp = gimple_range_global (name);
-  m_cache.m_globals.set_global_range (name, tmp);
+  m_cache.set_global_range (name, tmp);
 
   calc_stmt (r, s, name);
 
   if (is_a <gphi *> (s))
     r.intersect (tmp);
-  m_cache.m_globals.set_global_range (name, r);
+  m_cache.set_global_range (name, r);
   return true;
 }
 
@@ -1044,7 +1044,7 @@ gimple_ranger::export_global_ranges ()
       tree name = ssa_name (x);
       if (name && !SSA_NAME_IN_FREE_LIST (name)
          && gimple_range_ssa_p (name)
-         && m_cache.m_globals.get_global_range (r, name)
+         && m_cache.get_global_range (r, name)
          && !r.varying_p())
        {
          // Make sure the new range is a subset of the old range.
@@ -1088,7 +1088,7 @@ gimple_ranger::dump (FILE *f)
       edge e;
       int_range_max range;
       fprintf (f, "\n=========== BB %d ============\n", bb->index);
-      m_cache.m_on_entry.dump (f, bb);
+      m_cache.dump (f, bb);
 
       dump_bb (f, bb, 4, TDF_NONE);
 
@@ -1098,7 +1098,7 @@ gimple_ranger::dump (FILE *f)
          tree name = ssa_name (x);
          if (gimple_range_ssa_p (name) && SSA_NAME_DEF_STMT (name) &&
              gimple_bb (SSA_NAME_DEF_STMT (name)) == bb &&
-             m_cache.m_globals.get_global_range (range, name))
+             m_cache.get_global_range (range, name))
            {
              if (!range.varying_p ())
                {
@@ -1150,15 +1150,7 @@ gimple_ranger::dump (FILE *f)
        }
     }
 
-  m_cache.m_globals.dump (dump_file);
-  fprintf (f, "\n");
-
-  if (dump_flags & TDF_DETAILS)
-    {
-      fprintf (f, "\nDUMPING GORI MAP\n");
-      m_cache.dump (f);
-      fprintf (f, "\n");
-    }
+  m_cache.dump (dump_file, (dump_flags & TDF_DETAILS) != 0);
 }
 
 // If SCEV has any information about phi node NAME, return it as a range in R.